Test Report: Docker_Linux_containerd_arm64 12230

                    
b85c4fe0fcec6d00161b49ecbfd8182c89122b1a:2021-08-17:20050

Test fail (26/241)

TestAddons/parallel/Registry (237.38s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 32.253256ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-9np4b" [1b228b1c-c9df-4c36-a0f4-2a2fb6ec967b] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012983402s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-p5xh8" [0a7638e5-9e17-4626-aeb9-b7fe2abe695d] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008359212s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210817015042-1554185 delete po -l run=registry-test --now
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210817015042-1554185 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:299: (dbg) Non-zero exit: kubectl --context addons-20210817015042-1554185 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.124518773s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:301: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-20210817015042-1554185 run --rm registry-test --restart=Never --image=busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:305: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210817015042-1554185 ip
2021/08/17 01:54:25 [DEBUG] GET http://192.168.49.2:5000
2021/08/17 01:54:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:25 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/17 01:54:26 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:26 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/17 01:54:28 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:28 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/17 01:54:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:32 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/17 01:54:40 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:40 [DEBUG] GET http://192.168.49.2:5000
2021/08/17 01:54:40 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:40 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/17 01:54:41 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:41 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/17 01:54:43 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:43 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/17 01:54:47 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:47 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/17 01:54:55 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:56 [DEBUG] GET http://192.168.49.2:5000
2021/08/17 01:54:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:56 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/17 01:54:57 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:57 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/17 01:54:59 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:54:59 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/17 01:55:04 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:04 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/17 01:55:12 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:12 [DEBUG] GET http://192.168.49.2:5000
2021/08/17 01:55:12 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:12 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/17 01:55:13 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:13 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/17 01:55:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/17 01:55:19 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:19 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/17 01:55:27 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:29 [DEBUG] GET http://192.168.49.2:5000
2021/08/17 01:55:29 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:29 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/17 01:55:30 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:30 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/17 01:55:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:32 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/17 01:55:36 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:36 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/17 01:55:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:47 [DEBUG] GET http://192.168.49.2:5000
2021/08/17 01:55:47 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:47 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/17 01:55:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:48 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/17 01:55:50 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:50 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/17 01:55:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:55:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/17 01:56:02 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:07 [DEBUG] GET http://192.168.49.2:5000
2021/08/17 01:56:07 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:07 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/17 01:56:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:08 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/17 01:56:10 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:10 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/17 01:56:14 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:14 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/17 01:56:22 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:30 [DEBUG] GET http://192.168.49.2:5000
2021/08/17 01:56:30 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:30 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/17 01:56:31 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:31 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/17 01:56:33 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:33 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/17 01:56:37 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:37 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/17 01:56:45 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:54 [DEBUG] GET http://192.168.49.2:5000
2021/08/17 01:56:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/17 01:56:55 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:55 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/17 01:56:57 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:56:57 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/17 01:57:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/17 01:57:01 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/17 01:57:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:339: failed to check external access to http://192.168.49.2:5000: GET http://192.168.49.2:5000 giving up after 5 attempt(s): Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
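The retry pattern visible in the log above makes five attempts per round, backing off 1s/2s/4s/8s before giving up. The following is a minimal standalone sketch of that probe for manual reproduction, not minikube's actual helper; the URL is simply the `minikube ip`:5000 endpoint the failing check was dialing.

```go
// registryprobe.go: illustrative sketch of the backoff pattern seen above
// (five attempts per round, 1s/2s/4s/8s between retries). Hypothetical
// helper, not part of the minikube test suite.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	backoff := time.Second
	var lastErr error
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			// The registry answered; the test additionally expects an HTTP 200.
			fmt.Printf("GET %s succeeded: %s\n", url, resp.Status)
			resp.Body.Close()
			return nil
		}
		lastErr = err
		if attempt < 5 {
			fmt.Printf("GET %s failed: %v, retrying in %s (%d left)\n", url, err, backoff, 5-attempt)
			time.Sleep(backoff)
			backoff *= 2
		}
	}
	return fmt.Errorf("GET %s giving up after 5 attempt(s): %w", url, lastErr)
}

func main() {
	// 192.168.49.2:5000 is the address reported by "minikube ip" in the log above.
	if err := probe("http://192.168.49.2:5000"); err != nil {
		fmt.Println(err)
	}
}
```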
addons_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210817015042-1554185 addons disable registry --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210817015042-1554185
helpers_test.go:236: (dbg) docker inspect addons-20210817015042-1554185:

-- stdout --
	[
	    {
	        "Id": "d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416",
	        "Created": "2021-08-17T01:50:49.008425565Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1555108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T01:50:49.513909075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/hosts",
	        "LogPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416-json.log",
	        "Name": "/addons-20210817015042-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210817015042-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210817015042-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/merged",
	                "UpperDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/diff",
	                "WorkDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210817015042-1554185",
	                "Source": "/var/lib/docker/volumes/addons-20210817015042-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210817015042-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210817015042-1554185",
	                "name.minikube.sigs.k8s.io": "addons-20210817015042-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3e0a22fba78ee7873eb198b4450cb747bf4f2dc90aa87985648e04a1bfa9520",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50314"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50313"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50310"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50312"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50311"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3e0a22fba78",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210817015042-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d0219469219e",
	                        "addons-20210817015042-1554185"
	                    ],
	                    "NetworkID": "a9a617dbec2c4687c7bfc4bea262a36b8329d70029602dc944aed84d4dfb4f83",
	                    "EndpointID": "dad39de7953aad4709a05c2c9027de032d29f0302e6751762f5bb275759d2909",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
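For reference when reading the inspect dump above: the container publishes 5000/tcp only on 127.0.0.1 (host port 50312 in this run), while the failing external-access check dialed 192.168.49.2:5000 directly. The sketch below is a hypothetical helper, not part of the test code, showing one way to read that binding from `docker inspect` JSON piped on stdin.

```go
// port5000.go: hypothetical helper, not part of minikube. Reads the JSON
// array produced by `docker inspect <container>` on stdin and prints the
// host-side binding of the in-container registry port 5000/tcp.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type container struct {
	Name            string
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var containers []container
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		fmt.Fprintln(os.Stderr, "decode docker inspect output:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		for _, b := range c.NetworkSettings.Ports["5000/tcp"] {
			fmt.Printf("%s 5000/tcp -> %s:%s\n", c.Name, b.HostIp, b.HostPort)
		}
	}
}
```

Usage (illustrative): docker inspect addons-20210817015042-1554185 | go run port5000.go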
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210817015042-1554185 logs -n 25
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                  | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	| delete  | -p                                     | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	|         | download-only-20210817014929-1554185   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	|         | download-only-20210817014929-1554185   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-docker-20210817015028-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:42 UTC | Tue, 17 Aug 2021 01:50:42 UTC |
	|         | download-docker-20210817015028-1554185 |                                        |         |         |                               |                               |
	| start   | -p                                     | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:43 UTC | Tue, 17 Aug 2021 01:53:14 UTC |
	|         | addons-20210817015042-1554185          |                                        |         |         |                               |                               |
	|         | --wait=true --memory=4000              |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --addons=registry                      |                                        |         |         |                               |                               |
	|         | --addons=metrics-server                |                                        |         |         |                               |                               |
	|         | --addons=olm                           |                                        |         |         |                               |                               |
	|         | --addons=volumesnapshots               |                                        |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver           |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --addons=ingress                       |                                        |         |         |                               |                               |
	|         | --addons=gcp-auth                      |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:54:25 UTC | Tue, 17 Aug 2021 01:54:25 UTC |
	|         | ip                                     |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:57:09 UTC | Tue, 17 Aug 2021 01:57:10 UTC |
	|         | addons disable registry                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 01:50:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 01:50:43.004283 1554672 out.go:298] Setting OutFile to fd 1 ...
	I0817 01:50:43.004408 1554672 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:50:43.004431 1554672 out.go:311] Setting ErrFile to fd 2...
	I0817 01:50:43.004441 1554672 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:50:43.004581 1554672 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 01:50:43.004871 1554672 out.go:305] Setting JSON to false
	I0817 01:50:43.005775 1554672 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34381,"bootTime":1629130662,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 01:50:43.005843 1554672 start.go:121] virtualization:  
	I0817 01:50:43.008113 1554672 out.go:177] * [addons-20210817015042-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 01:50:43.010059 1554672 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 01:50:43.009081 1554672 notify.go:169] Checking for updates...
	I0817 01:50:43.011571 1554672 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 01:50:43.013130 1554672 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 01:50:43.014848 1554672 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 01:50:43.015025 1554672 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 01:50:43.049197 1554672 docker.go:132] docker version: linux-20.10.8
	I0817 01:50:43.049279 1554672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:50:43.144133 1554672 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:50:43.088038469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:50:43.144227 1554672 docker.go:244] overlay module found
	I0817 01:50:43.146324 1554672 out.go:177] * Using the docker driver based on user configuration
	I0817 01:50:43.146348 1554672 start.go:278] selected driver: docker
	I0817 01:50:43.146353 1554672 start.go:751] validating driver "docker" against <nil>
	I0817 01:50:43.146367 1554672 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 01:50:43.146408 1554672 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 01:50:43.146423 1554672 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 01:50:43.147842 1554672 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 01:50:43.148132 1554672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:50:43.222251 1554672 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:50:43.17341921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInf
o:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:50:43.222365 1554672 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 01:50:43.222521 1554672 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 01:50:43.222542 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:50:43.222549 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:50:43.222565 1554672 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 01:50:43.222570 1554672 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 01:50:43.222582 1554672 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 01:50:43.222589 1554672 start_flags.go:277] config:
	{Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:50:43.224429 1554672 out.go:177] * Starting control plane node addons-20210817015042-1554185 in cluster addons-20210817015042-1554185
	I0817 01:50:43.224467 1554672 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 01:50:43.226166 1554672 out.go:177] * Pulling base image ...
	I0817 01:50:43.226186 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:50:43.226218 1554672 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 01:50:43.226230 1554672 cache.go:56] Caching tarball of preloaded images
	I0817 01:50:43.226359 1554672 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 01:50:43.226380 1554672 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 01:50:43.226662 1554672 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json ...
	I0817 01:50:43.226688 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json: {Name:mk832a7647425177a5f2be8874629457bb58883b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:50:43.226846 1554672 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 01:50:43.267020 1554672 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 01:50:43.267048 1554672 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 01:50:43.267060 1554672 cache.go:205] Successfully downloaded all kic artifacts
	I0817 01:50:43.267095 1554672 start.go:313] acquiring machines lock for addons-20210817015042-1554185: {Name:mkc848aa47e63f497fa6d048b39bc33e9d106216 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 01:50:43.267208 1554672 start.go:317] acquired machines lock for "addons-20210817015042-1554185" in 92.061µs
	I0817 01:50:43.267235 1554672 start.go:89] Provisioning new machine with config: &{Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 01:50:43.267309 1554672 start.go:126] createHost starting for "" (driver="docker")
	I0817 01:50:43.269344 1554672 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0817 01:50:43.269558 1554672 start.go:160] libmachine.API.Create for "addons-20210817015042-1554185" (driver="docker")
	I0817 01:50:43.269585 1554672 client.go:168] LocalClient.Create starting
	I0817 01:50:43.269667 1554672 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0817 01:50:43.834992 1554672 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0817 01:50:44.271080 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 01:50:44.298072 1554672 cli_runner.go:162] docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 01:50:44.298133 1554672 network_create.go:255] running [docker network inspect addons-20210817015042-1554185] to gather additional debugging logs...
	I0817 01:50:44.298149 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185
	W0817 01:50:44.324372 1554672 cli_runner.go:162] docker network inspect addons-20210817015042-1554185 returned with exit code 1
	I0817 01:50:44.324396 1554672 network_create.go:258] error running [docker network inspect addons-20210817015042-1554185]: docker network inspect addons-20210817015042-1554185: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210817015042-1554185
	I0817 01:50:44.324409 1554672 network_create.go:260] output of [docker network inspect addons-20210817015042-1554185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210817015042-1554185
	
	** /stderr **
	I0817 01:50:44.324473 1554672 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 01:50:44.351093 1554672 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x40005be280] misses:0}
	I0817 01:50:44.351140 1554672 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 01:50:44.351162 1554672 network_create.go:106] attempt to create docker network addons-20210817015042-1554185 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 01:50:44.351211 1554672 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210817015042-1554185
	I0817 01:50:44.413803 1554672 network_create.go:90] docker network addons-20210817015042-1554185 192.168.49.0/24 created
	I0817 01:50:44.413829 1554672 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210817015042-1554185" container
	I0817 01:50:44.413892 1554672 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 01:50:44.440106 1554672 cli_runner.go:115] Run: docker volume create addons-20210817015042-1554185 --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --label created_by.minikube.sigs.k8s.io=true
	I0817 01:50:44.467518 1554672 oci.go:102] Successfully created a docker volume addons-20210817015042-1554185
	I0817 01:50:44.467581 1554672 cli_runner.go:115] Run: docker run --rm --name addons-20210817015042-1554185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --entrypoint /usr/bin/test -v addons-20210817015042-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 01:50:48.841251 1554672 cli_runner.go:168] Completed: docker run --rm --name addons-20210817015042-1554185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --entrypoint /usr/bin/test -v addons-20210817015042-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (4.373634594s)
	I0817 01:50:48.841276 1554672 oci.go:106] Successfully prepared a docker volume addons-20210817015042-1554185
	W0817 01:50:48.841301 1554672 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0817 01:50:48.841310 1554672 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0817 01:50:48.841360 1554672 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 01:50:48.841549 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:50:48.841570 1554672 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 01:50:48.841627 1554672 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210817015042-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 01:50:48.971581 1554672 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210817015042-1554185 --name addons-20210817015042-1554185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210817015042-1554185 --network addons-20210817015042-1554185 --ip 192.168.49.2 --volume addons-20210817015042-1554185:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 01:50:49.523596 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Running}}
	I0817 01:50:49.590786 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:50:49.633119 1554672 cli_runner.go:115] Run: docker exec addons-20210817015042-1554185 stat /var/lib/dpkg/alternatives/iptables
	I0817 01:50:49.741896 1554672 oci.go:278] the created container "addons-20210817015042-1554185" has a running status.
	I0817 01:50:49.741921 1554672 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa...
	I0817 01:50:50.532064 1554672 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 01:50:50.667778 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:50:50.707368 1554672 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 01:50:50.707384 1554672 kic_runner.go:115] Args: [docker exec --privileged addons-20210817015042-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 01:51:00.466206 1554672 kic_runner.go:124] Done: [docker exec --privileged addons-20210817015042-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]: (9.758798263s)
	I0817 01:51:02.783214 1554672 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210817015042-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (13.941553277s)
	I0817 01:51:02.783245 1554672 kic.go:188] duration metric: took 13.941672 seconds to extract preloaded images to volume
	I0817 01:51:02.783324 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:02.814748 1554672 machine.go:88] provisioning docker machine ...
	I0817 01:51:02.814781 1554672 ubuntu.go:169] provisioning hostname "addons-20210817015042-1554185"
	I0817 01:51:02.814865 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:02.842333 1554672 main.go:130] libmachine: Using SSH client type: native
	I0817 01:51:02.842498 1554672 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50314 <nil> <nil>}
	I0817 01:51:02.842516 1554672 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210817015042-1554185 && echo "addons-20210817015042-1554185" | sudo tee /etc/hostname
	I0817 01:51:02.970606 1554672 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210817015042-1554185
	
	I0817 01:51:02.970693 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:02.999373 1554672 main.go:130] libmachine: Using SSH client type: native
	I0817 01:51:02.999533 1554672 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50314 <nil> <nil>}
	I0817 01:51:02.999560 1554672 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210817015042-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210817015042-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210817015042-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 01:51:03.114034 1554672 main.go:130] libmachine: SSH cmd err, output: <nil>: 
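	A quick way to confirm what the hostname script above actually wrote, from the host side (illustrative only, not executed by the test):
	docker exec addons-20210817015042-1554185 cat /etc/hostname
	docker exec addons-20210817015042-1554185 grep addons-20210817015042-1554185 /etc/hosts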
	I0817 01:51:03.114055 1554672 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 01:51:03.114074 1554672 ubuntu.go:177] setting up certificates
	I0817 01:51:03.114082 1554672 provision.go:83] configureAuth start
	I0817 01:51:03.114135 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.141579 1554672 provision.go:138] copyHostCerts
	I0817 01:51:03.141653 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 01:51:03.141736 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 01:51:03.141784 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 01:51:03.141822 1554672 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.addons-20210817015042-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210817015042-1554185]
	I0817 01:51:03.398920 1554672 provision.go:172] copyRemoteCerts
	I0817 01:51:03.398968 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 01:51:03.399007 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.426820 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.508566 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 01:51:03.525114 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0817 01:51:03.539071 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 01:51:03.553109 1554672 provision.go:86] duration metric: configureAuth took 439.012307ms
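	The server certificate generated during configureAuth can be inspected with openssl to confirm the SANs requested above (illustrative sketch; run from inside the .minikube directory whose full path appears in the log):
	openssl x509 -noout -text -in machines/server.pem | grep -A1 'Subject Alternative Name'
	# should list 192.168.49.2, 127.0.0.1, localhost, minikube and addons-20210817015042-1554185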
	I0817 01:51:03.553124 1554672 ubuntu.go:193] setting minikube options for container-runtime
	I0817 01:51:03.553268 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:03.553275 1554672 machine.go:91] provisioned docker machine in 738.505134ms
	I0817 01:51:03.553280 1554672 client.go:171] LocalClient.Create took 20.283690224s
	I0817 01:51:03.553289 1554672 start.go:168] duration metric: libmachine.API.Create for "addons-20210817015042-1554185" took 20.283731225s
	I0817 01:51:03.553296 1554672 start.go:267] post-start starting for "addons-20210817015042-1554185" (driver="docker")
	I0817 01:51:03.553301 1554672 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 01:51:03.553340 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 01:51:03.553372 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.581866 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.664711 1554672 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 01:51:03.667021 1554672 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 01:51:03.667044 1554672 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 01:51:03.667055 1554672 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 01:51:03.667073 1554672 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 01:51:03.667081 1554672 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 01:51:03.667131 1554672 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 01:51:03.667155 1554672 start.go:270] post-start completed in 113.85344ms
	I0817 01:51:03.667437 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.695177 1554672 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json ...
	I0817 01:51:03.695366 1554672 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 01:51:03.695414 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.722965 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.802744 1554672 start.go:129] duration metric: createHost completed in 20.535424588s
	I0817 01:51:03.802761 1554672 start.go:80] releasing machines lock for "addons-20210817015042-1554185", held for 20.535539837s
	I0817 01:51:03.802834 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.830388 1554672 ssh_runner.go:149] Run: systemctl --version
	I0817 01:51:03.830437 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.830658 1554672 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 01:51:03.830713 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.864441 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.872939 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.950680 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 01:51:04.148514 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 01:51:04.156921 1554672 docker.go:153] disabling docker service ...
	I0817 01:51:04.156964 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 01:51:04.172287 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 01:51:04.180567 1554672 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 01:51:04.253873 1554672 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 01:51:04.337794 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 01:51:04.346079 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 01:51:04.356986 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
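	The containerd configuration above is shipped as a base64 blob; decoding it locally makes it readable (decoding sketch only; the first lines of the decoded config.toml are shown as comments and match minikube's default template):
	echo '<base64 blob from the log line above>' | base64 -d | head -n 8
	# root = "/var/lib/containerd"
	# state = "/run/containerd"
	# oom_score = 0
	# [grpc]
	#   address = "/run/containerd/containerd.sock"
	#   uid = 0
	#   gid = 0
	#   max_recv_message_size = 16777216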
	I0817 01:51:04.369213 1554672 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 01:51:04.375739 1554672 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 01:51:04.381264 1554672 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 01:51:04.455762 1554672 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 01:51:04.531663 1554672 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 01:51:04.531729 1554672 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 01:51:04.535130 1554672 start.go:413] Will wait 60s for crictl version
	I0817 01:51:04.535189 1554672 ssh_runner.go:149] Run: sudo crictl version
	I0817 01:51:04.564551 1554672 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T01:51:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 01:51:15.611398 1554672 ssh_runner.go:149] Run: sudo crictl version
	I0817 01:51:15.634965 1554672 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 01:51:15.635034 1554672 ssh_runner.go:149] Run: containerd --version
	I0817 01:51:15.656211 1554672 ssh_runner.go:149] Run: containerd --version
	I0817 01:51:15.679165 1554672 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 01:51:15.679262 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 01:51:15.708112 1554672 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 01:51:15.711074 1554672 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 01:51:15.720057 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:51:15.720115 1554672 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 01:51:15.753630 1554672 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 01:51:15.753654 1554672 containerd.go:517] Images already preloaded, skipping extraction
	I0817 01:51:15.753696 1554672 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 01:51:15.775284 1554672 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 01:51:15.775306 1554672 cache_images.go:74] Images are preloaded, skipping loading
	I0817 01:51:15.775376 1554672 ssh_runner.go:149] Run: sudo crictl info
	I0817 01:51:15.796264 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:51:15.796286 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:51:15.796297 1554672 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 01:51:15.796310 1554672 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210817015042-1554185 NodeName:addons-20210817015042-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 01:51:15.796446 1554672 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "addons-20210817015042-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 01:51:15.796533 1554672 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-20210817015042-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 01:51:15.796591 1554672 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 01:51:15.802721 1554672 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 01:51:15.802788 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 01:51:15.808456 1554672 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (574 bytes)
	I0817 01:51:15.819782 1554672 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 01:51:15.830993 1554672 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
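	The three scp operations above place the kubelet drop-in, the kubelet unit and the kubeadm config on the node; they can be reviewed from the host with docker exec (illustrative only, not a step the test performs):
	docker exec addons-20210817015042-1554185 systemctl cat kubelet
	docker exec addons-20210817015042-1554185 cat /var/tmp/minikube/kubeadm.yaml.new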
	I0817 01:51:15.841895 1554672 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 01:51:15.844431 1554672 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 01:51:15.852834 1554672 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185 for IP: 192.168.49.2
	I0817 01:51:15.852892 1554672 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 01:51:16.232897 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt ...
	I0817 01:51:16.232924 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt: {Name:mkc452a3ca463d1cef7aa1398b1abd9dddd24545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.233112 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key ...
	I0817 01:51:16.233129 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key: {Name:mkb1c0cc6e35e952c8fa312da56d58ae26957187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.233218 1554672 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 01:51:16.929155 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt ...
	I0817 01:51:16.929187 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt: {Name:mk17a5a660a62b953e570d93eac621069f930efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.929368 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key ...
	I0817 01:51:16.929384 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key: {Name:mk40bf80fb6d166c627fea37bd45ce901649a411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.929516 1554672 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key
	I0817 01:51:16.929537 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt with IP's: []
	I0817 01:51:17.141841 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt ...
	I0817 01:51:17.141869 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: {Name:mk127978c85cd8b22e7e4466afd86c3104950f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.142041 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key ...
	I0817 01:51:17.142056 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key: {Name:mk9c80a73b58e8a5fc9e3f4aca38da7b4d098319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.142143 1554672 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2
	I0817 01:51:17.142152 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 01:51:17.697755 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 ...
	I0817 01:51:17.697786 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2: {Name:mk68739490e6778fecd80380c013c3c92d6d4458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.698773 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2 ...
	I0817 01:51:17.698790 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2: {Name:mk844eac0cbe48c9235e9d8a8ec3aa0d9a836734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.698894 1554672 certs.go:308] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt
	I0817 01:51:17.698954 1554672 certs.go:312] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key
	I0817 01:51:17.699002 1554672 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key
	I0817 01:51:17.699012 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt with IP's: []
	I0817 01:51:18.551109 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt ...
	I0817 01:51:18.551144 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt: {Name:mkb606f4652991a4936ad1fb4f336e911d7af05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:18.551327 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key ...
	I0817 01:51:18.551342 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key: {Name:mkddf28d3df3bc53b2858cabdc2cbc08941228fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:18.551516 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 01:51:18.551557 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 01:51:18.551586 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 01:51:18.551613 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 01:51:18.554164 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 01:51:18.569715 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 01:51:18.584425 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 01:51:18.598873 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 01:51:18.613294 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 01:51:18.627638 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 01:51:18.642450 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 01:51:18.657110 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 01:51:18.671462 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 01:51:18.686137 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 01:51:18.696974 1554672 ssh_runner.go:149] Run: openssl version
	I0817 01:51:18.701232 1554672 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 01:51:18.707560 1554672 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.710430 1554672 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.710492 1554672 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.714912 1554672 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
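	The symlink name used above comes from the certificate's subject hash printed by the preceding openssl run; the relationship can be verified directly on the node (illustrative only):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, which is why the symlink is /etc/ssl/certs/b5213941.0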
	I0817 01:51:18.721735 1554672 kubeadm.go:390] StartCluster: {Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:51:18.721819 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 01:51:18.721874 1554672 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 01:51:18.749686 1554672 cri.go:76] found id: ""
	I0817 01:51:18.749758 1554672 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 01:51:18.755843 1554672 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 01:51:18.761633 1554672 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 01:51:18.761681 1554672 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 01:51:18.767360 1554672 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 01:51:18.767403 1554672 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 01:51:44.489308 1554672 out.go:204]   - Generating certificates and keys ...
	I0817 01:51:44.492258 1554672 out.go:204]   - Booting up control plane ...
	I0817 01:51:44.495405 1554672 out.go:204]   - Configuring RBAC rules ...
	I0817 01:51:44.497771 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:51:44.497802 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:51:44.499744 1554672 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 01:51:44.499923 1554672 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 01:51:44.514536 1554672 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 01:51:44.514555 1554672 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 01:51:44.537490 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
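	Once the CNI manifest has been applied, the resulting daemonset can be listed with the same kubeconfig the test uses on the node (illustrative only, not executed by the test):
	docker exec addons-20210817015042-1554185 sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get daemonsets -n kube-system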
	I0817 01:51:45.293207 1554672 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 01:51:45.293283 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:45.293354 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=addons-20210817015042-1554185 minikube.k8s.io/updated_at=2021_08_17T01_51_45_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:45.441142 1554672 ops.go:34] apiserver oom_adj: -16
	I0817 01:51:45.441307 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:46.028917 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:46.528512 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:47.028526 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:47.529129 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:48.028453 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:48.529207 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:49.029151 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:49.528902 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:50.028980 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:50.528509 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:51.028493 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:51.528957 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:52.028487 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:52.529123 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:53.029078 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:53.528513 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:54.029046 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:54.529488 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:55.029473 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:55.529461 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:56.029173 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:56.529368 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.028522 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.528583 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.667234 1554672 kubeadm.go:985] duration metric: took 12.373989422s to wait for elevateKubeSystemPrivileges.
	I0817 01:51:57.667260 1554672 kubeadm.go:392] StartCluster complete in 38.945530358s
	I0817 01:51:57.667277 1554672 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:57.667387 1554672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 01:51:57.667820 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:58.208648 1554672 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210817015042-1554185" rescaled to 1
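	The "rescaled to 1" message corresponds to dropping the CoreDNS replica count to one; expressed as a kubectl command it is roughly the following (a sketch of the effect, not the exact API call minikube's kapi helper makes):
	kubectl --context addons-20210817015042-1554185 -n kube-system scale deployment coredns --replicas=1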
	I0817 01:51:58.208753 1554672 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 01:51:58.211352 1554672 out.go:177] * Verifying Kubernetes components...
	I0817 01:51:58.211399 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 01:51:58.208813 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 01:51:58.208883 1554672 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0817 01:51:58.211533 1554672 addons.go:59] Setting volumesnapshots=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.211549 1554672 addons.go:135] Setting addon volumesnapshots=true in "addons-20210817015042-1554185"
	I0817 01:51:58.211576 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.212086 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.212136 1554672 addons.go:59] Setting ingress=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.212151 1554672 addons.go:135] Setting addon ingress=true in "addons-20210817015042-1554185"
	I0817 01:51:58.212176 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.212566 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.212695 1554672 addons.go:59] Setting metrics-server=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.212709 1554672 addons.go:135] Setting addon metrics-server=true in "addons-20210817015042-1554185"
	I0817 01:51:58.212725 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.213112 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.213170 1554672 addons.go:59] Setting olm=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.213183 1554672 addons.go:135] Setting addon olm=true in "addons-20210817015042-1554185"
	I0817 01:51:58.213199 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.213584 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.213633 1554672 addons.go:59] Setting registry=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.213647 1554672 addons.go:135] Setting addon registry=true in "addons-20210817015042-1554185"
	I0817 01:51:58.213662 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.214028 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.214076 1554672 addons.go:59] Setting storage-provisioner=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.214086 1554672 addons.go:135] Setting addon storage-provisioner=true in "addons-20210817015042-1554185"
	W0817 01:51:58.214091 1554672 addons.go:147] addon storage-provisioner should already be in state true
	I0817 01:51:58.214110 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.214476 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.214533 1554672 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.214556 1554672 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210817015042-1554185"
	I0817 01:51:58.214578 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.216636 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.209046 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:58.236087 1554672 addons.go:59] Setting default-storageclass=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.236110 1554672 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210817015042-1554185"
	I0817 01:51:58.236416 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.236497 1554672 addons.go:59] Setting gcp-auth=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.236511 1554672 mustload.go:65] Loading cluster: addons-20210817015042-1554185
	I0817 01:51:58.236645 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:58.236850 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.386147 1554672 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0817 01:51:58.387928 1554672 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0817 01:51:58.390873 1554672 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0817 01:51:58.390923 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0817 01:51:58.390932 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0817 01:51:58.390988 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.537862 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0817 01:51:58.539571 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0817 01:51:58.541491 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0817 01:51:58.545257 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0817 01:51:58.547105 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0817 01:51:58.548521 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0817 01:51:58.548575 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0817 01:51:58.548588 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0817 01:51:58.550068 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0817 01:51:58.548640 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.554947 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0817 01:51:58.556578 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0817 01:51:58.558141 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0817 01:51:58.558186 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0817 01:51:58.558193 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0817 01:51:58.558233 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.584190 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
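
The cli_runner calls above read back the host port that Docker mapped to the container's 22/tcp, and sshutil then dials 127.0.0.1 on that port with the machine's key. A minimal sketch of that handshake, assuming golang.org/x/crypto/ssh and a placeholder key path rather than minikube's actual sshutil implementation:

package main

import (
	"fmt"
	"io/ioutil"
	"os/exec"
	"strings"

	"golang.org/x/crypto/ssh"
)

// hostSSHPort asks Docker which host port is bound to the container's 22/tcp,
// mirroring the `docker container inspect -f ... HostPort` calls in the log.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("addons-20210817015042-1554185")
	if err != nil {
		panic(err)
	}
	key, err := ioutil.ReadFile("/path/to/machines/addons-20210817015042-1554185/id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:"+port, &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test node only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh client connected on port", port)
}
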
	I0817 01:51:58.586499 1554672 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0817 01:51:58.586556 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 01:51:58.586564 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 01:51:58.586607 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.586985 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 01:51:58.588594 1554672 node_ready.go:35] waiting up to 6m0s for node "addons-20210817015042-1554185" to be "Ready" ...
	I0817 01:51:58.646058 1554672 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0817 01:51:58.647847 1554672 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0817 01:51:58.697505 1554672 out.go:177]   - Using image registry:2.7.1
	I0817 01:51:58.699108 1554672 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0817 01:51:58.699188 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0817 01:51:58.699196 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0817 01:51:58.699248 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.738566 1554672 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 01:51:58.738649 1554672 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 01:51:58.738662 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 01:51:58.738710 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.776023 1554672 addons.go:135] Setting addon default-storageclass=true in "addons-20210817015042-1554185"
	W0817 01:51:58.776048 1554672 addons.go:147] addon default-storageclass should already be in state true
	I0817 01:51:58.776074 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.776526 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.805011 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.875835 1554672 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0817 01:51:58.875902 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0817 01:51:58.876004 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.903641 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.916757 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0817 01:51:58.916831 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.922593 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.927465 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.028544 1554672 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 01:51:59.028564 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 01:51:59.028615 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:59.050931 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.052785 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.078548 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.103856 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.136455 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.164633 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0817 01:51:59.164654 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0817 01:51:59.294730 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0817 01:51:59.295256 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0817 01:51:59.334238 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0817 01:51:59.362229 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0817 01:51:59.362285 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0817 01:51:59.419723 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 01:51:59.419773 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0817 01:51:59.430396 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 01:51:59.439975 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0817 01:51:59.440022 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0817 01:51:59.457816 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0817 01:51:59.457862 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0817 01:51:59.478937 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0817 01:51:59.484531 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0817 01:51:59.484544 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0817 01:51:59.492438 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 01:51:59.516776 1554672 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0817 01:51:59.516819 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0817 01:51:59.533786 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0817 01:51:59.533830 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0817 01:51:59.538721 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 01:51:59.538765 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 01:51:59.551896 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0817 01:51:59.551933 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0817 01:51:59.577593 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0817 01:51:59.577637 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0817 01:51:59.586368 1554672 addons.go:135] Setting addon gcp-auth=true in "addons-20210817015042-1554185"
	I0817 01:51:59.586439 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:59.586992 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:59.602216 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0817 01:51:59.643183 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0817 01:51:59.643200 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0817 01:51:59.643758 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 01:51:59.643772 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 01:51:59.653214 1554672 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0817 01:51:59.654750 1554672 out.go:177]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0817 01:51:59.654796 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0817 01:51:59.654803 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0817 01:51:59.654918 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:59.662952 1554672 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.075944471s)
	I0817 01:51:59.662971 1554672 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
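
The ssh_runner command that just completed rewrites the coredns ConfigMap in place: it dumps the Corefile, has sed splice a hosts block in front of the `forward . /etc/resolv.conf` directive so that host.minikube.internal resolves to 192.168.49.1, and pipes the result back through kubectl replace. A rough local-only sketch of composing and running that same pipeline from Go (the paths and host entry come from the log; the helper itself is illustrative, not minikube's start.go):

package main

import (
	"fmt"
	"os/exec"
)

// injectHostRecord mirrors the pipeline from the log: dump the coredns ConfigMap,
// splice a hosts{} block in before the forward directive, and replace the object.
// The resulting Corefile fragment looks like:
//
//	hosts {
//	   192.168.49.1 host.minikube.internal
//	   fallthrough
//	}
//	forward . /etc/resolv.conf
func injectHostRecord(kubectl, kubeconfig, hostIP string) error {
	pipeline := fmt.Sprintf(
		`sudo %[1]s --kubeconfig=%[2]s -n kube-system get configmap coredns -o yaml | `+
			`sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %[3]s host.minikube.internal\n           fallthrough\n        }' | `+
			`sudo %[1]s --kubeconfig=%[2]s replace -f -`,
		kubectl, kubeconfig, hostIP)
	out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
	if err != nil {
		return fmt.Errorf("coredns injection failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := injectHostRecord(
		"/var/lib/minikube/binaries/v1.21.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		"192.168.49.1",
	)
	if err != nil {
		fmt.Println(err)
	}
}
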
	I0817 01:51:59.675174 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0817 01:51:59.683436 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0817 01:51:59.683451 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0817 01:51:59.698631 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0817 01:51:59.698646 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0817 01:51:59.718898 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.744738 1554672 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:51:59.744759 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0817 01:51:59.753993 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0817 01:51:59.754011 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0817 01:51:59.768957 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 01:51:59.806478 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0817 01:51:59.806499 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0817 01:51:59.841542 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:51:59.896907 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0817 01:51:59.896928 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0817 01:52:00.017093 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0817 01:52:00.017114 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0817 01:52:00.098144 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0817 01:52:00.098165 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0817 01:52:00.148128 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0817 01:52:00.148150 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0817 01:52:00.212978 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 01:52:00.213000 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0817 01:52:00.240295 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0817 01:52:00.240316 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0817 01:52:00.328162 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 01:52:00.392480 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0817 01:52:00.392504 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0817 01:52:00.475797 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 01:52:00.475819 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0817 01:52:00.588665 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 01:52:00.613870 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:01.300912 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.870464987s)
	I0817 01:52:01.300955 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (1.966695194s)
	I0817 01:52:01.300964 1554672 addons.go:313] Verifying addon ingress=true in "addons-20210817015042-1554185"
	I0817 01:52:01.302793 1554672 out.go:177] * Verifying ingress addon...
	I0817 01:52:01.301217 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.808764624s)
	I0817 01:52:01.304580 1554672 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0817 01:52:01.324823 1554672 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0817 01:52:01.324869 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
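
The kapi.go lines here are a polling loop: list the pods matching the label selector in the target namespace, then re-check until every match leaves Pending and reports Running (or the timeout expires). A condensed sketch of that pattern with client-go; waitForPodsWithLabel, the poll interval, and the kubeconfig path are placeholders, not minikube's exact kapi API:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsWithLabel polls until every pod matching the selector is Running.
func waitForPodsWithLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // keep polling through transient errors and empty lists
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForPodsWithLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute)
	fmt.Println("ingress pods ready:", err == nil)
}
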
	I0817 01:52:01.866150 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:02.455106 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:02.756616 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:02.904020 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:03.374970 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:03.900784 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:04.389210 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:04.828604 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:05.113059 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:05.328501 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:05.828619 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.329237 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.849401 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.17419898s)
	I0817 01:52:06.849430 1554672 addons.go:313] Verifying addon registry=true in "addons-20210817015042-1554185"
	I0817 01:52:06.851610 1554672 out.go:177] * Verifying registry addon...
	I0817 01:52:06.849712 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.080729919s)
	I0817 01:52:06.849894 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (7.247660472s)
	I0817 01:52:06.850006 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.008434003s)
	I0817 01:52:06.850075 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (6.521890577s)
	I0817 01:52:06.853580 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0817 01:52:06.853656 1554672 addons.go:313] Verifying addon metrics-server=true in "addons-20210817015042-1554185"
	W0817 01:52:06.853699 1554672 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0817 01:52:06.853907 1554672 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	W0817 01:52:06.853735 1554672 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0817 01:52:06.853958 1554672 retry.go:31] will retry after 291.140013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
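
The two "apply failed, will retry" warnings above are expected on the first pass: crds.yaml and the snapshot CRDs have only just been created, so the OperatorGroup, ClusterServiceVersion, CatalogSource and VolumeSnapshotClass kinds are not yet being served when olm.yaml and csi-hostpath-snapshotclass.yaml are applied. retry.go waits a few hundred milliseconds and re-runs the same kubectl apply; the re-runs at 01:52:07 below complete without a further retry warning in this log. A minimal sketch of that retry-on-failure pattern, with applyWithRetry and the jittered delay as illustrative placeholders:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs a kubectl apply command until it exits cleanly,
// waiting a short, jittered interval between attempts, since freshly-created
// CRD kinds only become servable shortly after the CRDs themselves exist.
func applyWithRetry(cmd string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d failed: %v\n%s", i+1, err, out)
		// back off roughly 300-700ms with jitter before the next attempt
		time.Sleep(300*time.Millisecond + time.Duration(rand.Intn(400))*time.Millisecond)
	}
	return lastErr
}

func main() {
	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.21.3/kubectl apply " +
		"-f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml"
	if err := applyWithRetry(cmd, 5); err != nil {
		fmt.Println(err)
	}
}
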
	I0817 01:52:06.853773 1554672 addons.go:313] Verifying addon gcp-auth=true in "addons-20210817015042-1554185"
	I0817 01:52:06.856170 1554672 out.go:177] * Verifying gcp-auth addon...
	I0817 01:52:06.858037 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0817 01:52:06.879437 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.901505 1554672 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 01:52:06.901521 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:06.902116 1554672 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0817 01:52:06.902127 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.114493 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:07.145764 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:52:07.214608 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0817 01:52:07.318415 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.729707072s)
	I0817 01:52:07.318482 1554672 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210817015042-1554185"
	I0817 01:52:07.320343 1554672 out.go:177] * Verifying csi-hostpath-driver addon...
	I0817 01:52:07.322240 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0817 01:52:07.329026 1554672 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0817 01:52:07.329072 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:07.329707 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:07.406785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:07.407051 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.829299 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:07.833611 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:07.905862 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.906518 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.243240 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.097293811s)
	I0817 01:52:08.329812 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:08.338224 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:08.405779 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:08.407978 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.530852 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (1.316180136s)
	I0817 01:52:08.829006 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:08.834034 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:08.905993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.906433 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.328255 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:09.333785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:09.405657 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:09.405914 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.613886 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:09.829205 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:09.832931 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:09.905035 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.905962 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.328643 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:10.333042 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:10.404941 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:10.405901 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.829248 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:10.833275 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:10.905773 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.906291 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.328954 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:11.333012 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:11.409301 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.410066 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:11.614143 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:11.828872 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:11.833797 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:11.904929 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.905665 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.328367 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:12.333086 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:12.405384 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.405823 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:12.829376 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:12.833255 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:12.905024 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.905295 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.330689 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:13.338216 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:13.404972 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.406085 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:13.829177 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:13.832929 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:13.904662 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.905242 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.113342 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:14.328450 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:14.333455 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:14.404940 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.405321 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:14.827779 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:14.832993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:14.905252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.905259 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:15.328264 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:15.332934 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:15.404658 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:15.405224 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:15.828486 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:15.833605 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:15.904727 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:15.905383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:16.328197 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:16.332914 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:16.405192 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:16.405977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:16.613508 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:16.828234 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:16.833383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:16.904446 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:16.905357 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.327749 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:17.337646 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:17.404755 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:17.405248 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.827645 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:17.832968 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:17.904120 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.905322 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:18.328032 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:18.332346 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:18.405262 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:18.405850 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:18.828047 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:18.833667 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:18.906070 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:18.906612 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:19.112711 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:19.327808 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:19.332949 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:19.404756 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:19.404964 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:19.828001 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:19.833437 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:19.904449 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:19.904977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.327656 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:20.333295 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:20.404715 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.405667 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:20.828390 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:20.833214 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:20.905458 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.905671 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:21.328312 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:21.333138 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:21.405037 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:21.406170 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:21.612764 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:21.944477 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:21.946682 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:21.947605 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:21.947754 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:22.328433 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:22.333541 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:22.404285 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:22.405669 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:22.827511 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:22.833159 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:22.905254 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:22.905581 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:23.328750 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:23.333436 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:23.404313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:23.405077 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:23.613578 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:23.828253 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:23.832694 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:23.904993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:23.905761 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:24.328880 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:24.333313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:24.404520 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:24.404733 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:24.828322 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:24.833601 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:24.905217 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:24.905274 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.330911 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:25.337306 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:25.404857 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.405921 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:25.832639 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:25.835193 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:25.905020 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.905738 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:26.112693 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:26.327937 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:26.333091 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:26.405361 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:26.405698 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:26.828674 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:26.833006 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:26.905177 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:26.906093 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:27.337144 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:27.338231 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:27.406265 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:27.406265 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:27.828866 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:27.833010 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:27.904570 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:27.905457 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.112963 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:28.328408 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:28.333808 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:28.404888 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:28.405625 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.828928 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:28.833221 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:28.905969 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.906240 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:29.330142 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:29.334291 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:29.404551 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:29.405831 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:29.837438 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:29.838402 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:29.905810 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:29.905987 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.113285 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:30.328348 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:30.332925 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:30.405080 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:30.405351 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.828025 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:30.832792 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:30.905180 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.905627 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.328284 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:31.333115 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:31.405329 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:31.406706 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.828629 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:31.833620 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:31.908890 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.911347 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.113408 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:32.328824 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:32.333028 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:32.404805 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.405808 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:32.829223 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:32.833077 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:32.905936 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.906733 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:33.113600 1554672 node_ready.go:49] node "addons-20210817015042-1554185" has status "Ready":"True"
	I0817 01:52:33.113625 1554672 node_ready.go:38] duration metric: took 34.525011363s waiting for node "addons-20210817015042-1554185" to be "Ready" ...
	I0817 01:52:33.113634 1554672 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 01:52:33.122258 1554672 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:33.328105 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:33.333112 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:33.405131 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:33.406483 1554672 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 01:52:33.406499 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:33.828753 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:33.833308 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:33.905785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:33.906578 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:34.328900 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:34.333293 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:34.405074 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:34.405422 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:34.829036 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:34.844069 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:34.907082 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:34.907261 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.133323 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:35.329658 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:35.340946 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:35.406005 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.406344 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:35.828964 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:35.836081 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:35.905164 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.905926 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:36.328635 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:36.333208 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:36.406327 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:36.406693 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:36.828912 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:36.834669 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:36.906233 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:36.907548 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:37.328276 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:37.333065 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:37.443517 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:37.443853 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:37.633378 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:37.829201 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:37.833434 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:37.906518 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:37.906857 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.329240 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:38.333317 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:38.408662 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.409011 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:38.828315 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:38.837240 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:38.904802 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.906525 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.329255 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:39.346113 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:39.413436 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.413760 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:39.634418 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:39.828371 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:39.833885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:39.905904 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.906262 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:40.328884 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:40.333598 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:40.405309 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:40.407193 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:40.828938 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:40.833697 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:40.905855 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:40.906245 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:41.329054 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:41.334180 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:41.404918 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:41.406327 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:41.828350 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:41.833549 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:41.905158 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:41.905842 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:42.193681 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:42.328599 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:42.335515 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:42.405022 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:42.405819 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:42.828942 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:42.833740 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:42.905762 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:42.905954 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:43.328334 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:43.333885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:43.415938 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:43.416337 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:43.829129 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:43.839083 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:43.905165 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:43.905905 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.328646 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:44.333389 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:44.404851 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.406163 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:44.634366 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:52:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:44.828620 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:44.833682 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:44.905712 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.909482 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:45.328143 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:45.332944 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:45.406611 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:45.407338 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:45.828634 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:45.832978 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:45.904648 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:45.905363 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:46.328711 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:46.333910 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:46.405839 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:46.406763 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:46.635221 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:52:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:46.828340 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:46.833989 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:46.905252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:46.906215 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:47.328332 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:47.333832 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:47.405675 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:47.407973 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:47.827969 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:47.833906 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:47.907574 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:47.912357 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.328701 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:48.333127 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:48.406135 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:48.406524 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.636787 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:48.828308 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:48.833330 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:48.906467 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.906683 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:49.328422 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:49.333598 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:49.405055 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:49.405237 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:49.828698 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:49.833563 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:49.905439 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:49.905671 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:50.329000 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:50.335089 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:50.406085 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:50.407525 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:50.829299 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:50.833324 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:50.906506 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:50.906956 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.134950 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:51.327885 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:51.333409 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:51.405287 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:51.406140 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.829287 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:51.834079 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:51.905595 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.906917 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.136668 1554672 pod_ready.go:92] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.136691 1554672 pod_ready.go:81] duration metric: took 19.014386562s waiting for pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.136717 1554672 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.140525 1554672 pod_ready.go:92] pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.140545 1554672 pod_ready.go:81] duration metric: took 3.820392ms waiting for pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.140557 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.144374 1554672 pod_ready.go:92] pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.144391 1554672 pod_ready.go:81] duration metric: took 3.805ms waiting for pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.144400 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.147997 1554672 pod_ready.go:92] pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.148018 1554672 pod_ready.go:81] duration metric: took 3.596018ms waiting for pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.148027 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-88pjl" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.151630 1554672 pod_ready.go:92] pod "kube-proxy-88pjl" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.151645 1554672 pod_ready.go:81] duration metric: took 3.612895ms waiting for pod "kube-proxy-88pjl" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.151654 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.328964 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:52.333708 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:52.405187 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:52.406370 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.533532 1554672 pod_ready.go:92] pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.533558 1554672 pod_ready.go:81] duration metric: took 381.895022ms waiting for pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.533568 1554672 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.829155 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:52.839844 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:52.905885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.906272 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.346344 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:53.352796 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:53.409937 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:53.410482 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.834056 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:53.834773 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:53.907214 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.907172 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.331399 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:54.336335 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:54.407048 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.410847 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:54.829058 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:54.833883 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:54.905684 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:54.906829 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.944019 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:55.328849 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:55.333455 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:55.406435 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:55.408050 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:55.834184 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:55.836250 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:55.907784 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:55.908229 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.340402 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:56.341855 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:56.405913 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:56.406308 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.829718 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:56.840586 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:56.908288 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:56.908568 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.948818 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:57.328503 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:57.334462 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:57.406776 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:57.407190 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:57.828588 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:57.833847 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:57.905081 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:57.906429 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:58.329593 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:58.335086 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:58.405528 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:58.406555 1554672 kapi.go:108] duration metric: took 51.552974836s to wait for kubernetes.io/minikube-addons=registry ...
	I0817 01:52:58.829266 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:58.833517 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:58.905974 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:59.342609 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:59.348252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:59.405313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:59.444841 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:59.828685 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:59.833928 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:59.905309 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:00.328962 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:00.333845 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:00.405313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:00.829039 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:00.834166 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:00.904823 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.328747 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:01.334336 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:01.404643 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.829758 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:01.835420 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:01.905318 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.945948 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:53:02.376424 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:02.377873 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:02.404990 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:02.828812 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:02.833383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:02.904641 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:03.329032 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:03.337245 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:03.406764 1554672 kapi.go:108] duration metric: took 56.548723137s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0817 01:53:03.408669 1554672 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20210817015042-1554185 cluster.
	I0817 01:53:03.410521 1554672 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0817 01:53:03.412326 1554672 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0817 01:53:03.448173 1554672 pod_ready.go:92] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"True"
	I0817 01:53:03.448196 1554672 pod_ready.go:81] duration metric: took 10.914620384s waiting for pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace to be "Ready" ...
	I0817 01:53:03.448215 1554672 pod_ready.go:38] duration metric: took 30.334547327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 01:53:03.448235 1554672 api_server.go:50] waiting for apiserver process to appear ...
	I0817 01:53:03.448250 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:03.448304 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:03.564171 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:03.564232 1554672 cri.go:76] found id: ""
	I0817 01:53:03.564250 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:03.564343 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.575403 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:03.575484 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:03.604432 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:03.604494 1554672 cri.go:76] found id: ""
	I0817 01:53:03.604513 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:03.604561 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.607149 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:03.607215 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:03.632895 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:03.632908 1554672 cri.go:76] found id: ""
	I0817 01:53:03.632913 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:03.632967 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.635372 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:03.635435 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:03.664635 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:03.664650 1554672 cri.go:76] found id: ""
	I0817 01:53:03.664655 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:03.664689 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.667197 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:03.667270 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:03.691527 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:03.691545 1554672 cri.go:76] found id: ""
	I0817 01:53:03.691550 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:03.691582 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.693995 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:03.694060 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:03.717435 1554672 cri.go:76] found id: ""
	I0817 01:53:03.717475 1554672 logs.go:270] 0 containers: []
	W0817 01:53:03.717489 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:03.717495 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:03.717533 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:03.741717 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:03.741734 1554672 cri.go:76] found id: ""
	I0817 01:53:03.741739 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:03.741798 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.744804 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:03.744851 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:03.771775 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:03.771789 1554672 cri.go:76] found id: ""
	I0817 01:53:03.771794 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:03.771831 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.774470 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:03.774489 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:03.801776 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:03.801798 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:03.837058 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:03.840579 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:03.843933 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:03.843957 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:03.898510 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:03.898538 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:03.952593 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:03.952621 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:04.082990 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:04.083052 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:04.223853 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:04.223887 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:04.331965 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:04.338761 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:04.340534 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:04.342392 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:04.357212 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:04.357263 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:04.694598 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:04.694717 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:04.828761 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:04.828816 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:04.851348 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:04.852551 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:04.876644 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:04.876688 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:05.331362 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:05.343522 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:05.831960 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:05.841720 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:06.329544 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:06.334286 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:06.829369 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:06.833923 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.328774 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:07.334115 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.467368 1554672 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 01:53:07.487646 1554672 api_server.go:70] duration metric: took 1m9.278576044s to wait for apiserver process to appear ...
	I0817 01:53:07.487700 1554672 api_server.go:86] waiting for apiserver healthz status ...
	I0817 01:53:07.487733 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:07.487806 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:07.534592 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:07.534644 1554672 cri.go:76] found id: ""
	I0817 01:53:07.534661 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:07.534726 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.538672 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:07.538745 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:07.572611 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:07.572657 1554672 cri.go:76] found id: ""
	I0817 01:53:07.572674 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:07.572739 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.576722 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:07.576801 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:07.611541 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:07.611559 1554672 cri.go:76] found id: ""
	I0817 01:53:07.611564 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:07.611627 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.614311 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:07.614389 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:07.641823 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:07.641859 1554672 cri.go:76] found id: ""
	I0817 01:53:07.641864 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:07.641897 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.644712 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:07.644770 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:07.667773 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:07.667788 1554672 cri.go:76] found id: ""
	I0817 01:53:07.667793 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:07.667831 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.670409 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:07.670478 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:07.695746 1554672 cri.go:76] found id: ""
	I0817 01:53:07.695763 1554672 logs.go:270] 0 containers: []
	W0817 01:53:07.695768 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:07.695784 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:07.695828 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:07.727549 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:07.727592 1554672 cri.go:76] found id: ""
	I0817 01:53:07.727608 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:07.727672 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.731096 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:07.731168 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:07.758719 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:07.758734 1554672 cri.go:76] found id: ""
	I0817 01:53:07.758739 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:07.758787 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.761946 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:07.761964 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:07.830586 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:07.834021 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.863604 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:07.863626 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:07.887301 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:07.887356 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:07.918171 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:07.918195 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:08.012682 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:08.012712 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:08.059071 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:08.059126 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:08.163276 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:08.163302 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:08.176772 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:08.176790 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:08.330227 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:08.344515 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:08.425430 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:08.425453 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:08.486450 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:08.486475 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:08.515454 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:08.515475 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:08.542038 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:08.542057 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:08.828741 1554672 kapi.go:108] duration metric: took 1m7.524156223s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0817 01:53:08.834977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:09.335143 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:09.835186 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:10.335936 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:10.834892 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.068088 1554672 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 01:53:11.076771 1554672 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 01:53:11.077605 1554672 api_server.go:139] control plane version: v1.21.3
	I0817 01:53:11.077645 1554672 api_server.go:129] duration metric: took 3.589928004s to wait for apiserver health ...
	I0817 01:53:11.077667 1554672 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 01:53:11.077694 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:11.077770 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:11.134012 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:11.134030 1554672 cri.go:76] found id: ""
	I0817 01:53:11.134035 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:11.134081 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.136813 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:11.136882 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:11.158746 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:11.158763 1554672 cri.go:76] found id: ""
	I0817 01:53:11.158768 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:11.158868 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.161890 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:11.161955 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:11.185618 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:11.185638 1554672 cri.go:76] found id: ""
	I0817 01:53:11.185643 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:11.185698 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.188273 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:11.188341 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:11.212061 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:11.212084 1554672 cri.go:76] found id: ""
	I0817 01:53:11.212104 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:11.212154 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.214710 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:11.214777 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:11.254063 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:11.254080 1554672 cri.go:76] found id: ""
	I0817 01:53:11.254086 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:11.254150 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.257322 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:11.257386 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:11.280677 1554672 cri.go:76] found id: ""
	I0817 01:53:11.280719 1554672 logs.go:270] 0 containers: []
	W0817 01:53:11.280735 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:11.280749 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:11.280792 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:11.302301 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:11.302344 1554672 cri.go:76] found id: ""
	I0817 01:53:11.302359 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:11.302405 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.305069 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:11.305128 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:11.334791 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.337025 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:11.337041 1554672 cri.go:76] found id: ""
	I0817 01:53:11.337046 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:11.337097 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.340390 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:11.340407 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:11.377298 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:11.377344 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:11.408451 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:11.408473 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:11.514559 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:11.514589 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:11.567396 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:11.567423 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:11.625821 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:11.625847 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:11.652282 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:11.652306 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:11.675002 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:11.675047 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:11.697704 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:11.697724 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:11.745590 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:11.745611 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:11.836311 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.837956 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:11.837993 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:11.865409 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:11.865430 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:12.335417 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:12.834938 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:13.335078 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:13.835219 1554672 kapi.go:108] duration metric: took 1m6.512977174s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0817 01:53:13.838858 1554672 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, volumesnapshots, olm, registry, gcp-auth, ingress, csi-hostpath-driver
	I0817 01:53:13.838918 1554672 addons.go:344] enableAddons completed in 1m15.630038865s
	I0817 01:53:14.513128 1554672 system_pods.go:59] 18 kube-system pods found
	I0817 01:53:14.513163 1554672 system_pods.go:61] "coredns-558bd4d5db-sxct6" [954e41e3-b5a4-4fa8-8926-fa9f53507414] Running
	I0817 01:53:14.513169 1554672 system_pods.go:61] "csi-hostpath-attacher-0" [ae96aa4a-3965-45a9-8e94-db9fee5bcae1] Running
	I0817 01:53:14.513174 1554672 system_pods.go:61] "csi-hostpath-provisioner-0" [e6cedc35-32e7-4be9-907a-063eaa25f07d] Running
	I0817 01:53:14.513178 1554672 system_pods.go:61] "csi-hostpath-resizer-0" [9c8b4283-1ee1-45a6-ace0-e0096867592a] Running
	I0817 01:53:14.513183 1554672 system_pods.go:61] "csi-hostpath-snapshotter-0" [d23b3d96-e125-4669-a694-40c25a9ca2bc] Running
	I0817 01:53:14.513189 1554672 system_pods.go:61] "csi-hostpathplugin-0" [50373dc3-be79-4049-b2f4-e19bb0a79c10] Running
	I0817 01:53:14.513193 1554672 system_pods.go:61] "etcd-addons-20210817015042-1554185" [b0a759e2-33c6-486b-aee8-e1019669fb12] Running
	I0817 01:53:14.513200 1554672 system_pods.go:61] "kindnet-xp2kn" [234e19f8-3cdd-4c44-9dff-290f932bba79] Running
	I0817 01:53:14.513205 1554672 system_pods.go:61] "kube-apiserver-addons-20210817015042-1554185" [3704ee0c-53da-4106-b407-9c6829a74921] Running
	I0817 01:53:14.513215 1554672 system_pods.go:61] "kube-controller-manager-addons-20210817015042-1554185" [a7e9992d-6083-4141-a778-7ab31067cb40] Running
	I0817 01:53:14.513220 1554672 system_pods.go:61] "kube-proxy-88pjl" [3152779f-8eaa-4982-8a07-a39f7c215086] Running
	I0817 01:53:14.513225 1554672 system_pods.go:61] "kube-scheduler-addons-20210817015042-1554185" [3e4bab6f-a0a1-46e8-83dd-f7b11f4e9d62] Running
	I0817 01:53:14.513229 1554672 system_pods.go:61] "metrics-server-77c99ccb96-x8mh4" [634199c2-0567-4fb7-b5d6-2205aa4fcfd4] Running
	I0817 01:53:14.513238 1554672 system_pods.go:61] "registry-9np4b" [1b228b1c-c9df-4c36-a0f4-2a2fb6ec967b] Running
	I0817 01:53:14.513247 1554672 system_pods.go:61] "registry-proxy-p5xh8" [0a7638e5-9e17-4626-aeb9-b7fe2abe695d] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 01:53:14.513257 1554672 system_pods.go:61] "snapshot-controller-989f9ddc8-rcswn" [0d6290bd-2e2d-4e14-b299-f4b14ea2de3b] Running
	I0817 01:53:14.513264 1554672 system_pods.go:61] "snapshot-controller-989f9ddc8-zqgfr" [7ceff31f-ed10-44dd-8c0f-7063e87beadc] Running
	I0817 01:53:14.513274 1554672 system_pods.go:61] "storage-provisioner" [3f4cb2a6-c88b-486f-bba7-cef64ca39e9a] Running
	I0817 01:53:14.513279 1554672 system_pods.go:74] duration metric: took 3.43559739s to wait for pod list to return data ...
	I0817 01:53:14.513290 1554672 default_sa.go:34] waiting for default service account to be created ...
	I0817 01:53:14.515707 1554672 default_sa.go:45] found service account: "default"
	I0817 01:53:14.515727 1554672 default_sa.go:55] duration metric: took 2.432583ms for default service account to be created ...
	I0817 01:53:14.515734 1554672 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 01:53:14.523274 1554672 system_pods.go:86] 18 kube-system pods found
	I0817 01:53:14.523301 1554672 system_pods.go:89] "coredns-558bd4d5db-sxct6" [954e41e3-b5a4-4fa8-8926-fa9f53507414] Running
	I0817 01:53:14.523309 1554672 system_pods.go:89] "csi-hostpath-attacher-0" [ae96aa4a-3965-45a9-8e94-db9fee5bcae1] Running
	I0817 01:53:14.523314 1554672 system_pods.go:89] "csi-hostpath-provisioner-0" [e6cedc35-32e7-4be9-907a-063eaa25f07d] Running
	I0817 01:53:14.523324 1554672 system_pods.go:89] "csi-hostpath-resizer-0" [9c8b4283-1ee1-45a6-ace0-e0096867592a] Running
	I0817 01:53:14.523332 1554672 system_pods.go:89] "csi-hostpath-snapshotter-0" [d23b3d96-e125-4669-a694-40c25a9ca2bc] Running
	I0817 01:53:14.523338 1554672 system_pods.go:89] "csi-hostpathplugin-0" [50373dc3-be79-4049-b2f4-e19bb0a79c10] Running
	I0817 01:53:14.523346 1554672 system_pods.go:89] "etcd-addons-20210817015042-1554185" [b0a759e2-33c6-486b-aee8-e1019669fb12] Running
	I0817 01:53:14.523351 1554672 system_pods.go:89] "kindnet-xp2kn" [234e19f8-3cdd-4c44-9dff-290f932bba79] Running
	I0817 01:53:14.523364 1554672 system_pods.go:89] "kube-apiserver-addons-20210817015042-1554185" [3704ee0c-53da-4106-b407-9c6829a74921] Running
	I0817 01:53:14.523369 1554672 system_pods.go:89] "kube-controller-manager-addons-20210817015042-1554185" [a7e9992d-6083-4141-a778-7ab31067cb40] Running
	I0817 01:53:14.523377 1554672 system_pods.go:89] "kube-proxy-88pjl" [3152779f-8eaa-4982-8a07-a39f7c215086] Running
	I0817 01:53:14.523382 1554672 system_pods.go:89] "kube-scheduler-addons-20210817015042-1554185" [3e4bab6f-a0a1-46e8-83dd-f7b11f4e9d62] Running
	I0817 01:53:14.523391 1554672 system_pods.go:89] "metrics-server-77c99ccb96-x8mh4" [634199c2-0567-4fb7-b5d6-2205aa4fcfd4] Running
	I0817 01:53:14.523396 1554672 system_pods.go:89] "registry-9np4b" [1b228b1c-c9df-4c36-a0f4-2a2fb6ec967b] Running
	I0817 01:53:14.523405 1554672 system_pods.go:89] "registry-proxy-p5xh8" [0a7638e5-9e17-4626-aeb9-b7fe2abe695d] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 01:53:14.523414 1554672 system_pods.go:89] "snapshot-controller-989f9ddc8-rcswn" [0d6290bd-2e2d-4e14-b299-f4b14ea2de3b] Running
	I0817 01:53:14.523429 1554672 system_pods.go:89] "snapshot-controller-989f9ddc8-zqgfr" [7ceff31f-ed10-44dd-8c0f-7063e87beadc] Running
	I0817 01:53:14.523434 1554672 system_pods.go:89] "storage-provisioner" [3f4cb2a6-c88b-486f-bba7-cef64ca39e9a] Running
	I0817 01:53:14.523439 1554672 system_pods.go:126] duration metric: took 7.700756ms to wait for k8s-apps to be running ...
	I0817 01:53:14.523449 1554672 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 01:53:14.523496 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 01:53:14.532286 1554672 system_svc.go:56] duration metric: took 8.834069ms WaitForService to wait for kubelet.
	I0817 01:53:14.532341 1554672 kubeadm.go:547] duration metric: took 1m16.323273553s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 01:53:14.532368 1554672 node_conditions.go:102] verifying NodePressure condition ...
	I0817 01:53:14.535572 1554672 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 01:53:14.535600 1554672 node_conditions.go:123] node cpu capacity is 2
	I0817 01:53:14.535613 1554672 node_conditions.go:105] duration metric: took 3.24014ms to run NodePressure ...
	I0817 01:53:14.535627 1554672 start.go:231] waiting for startup goroutines ...
	I0817 01:53:14.849964 1554672 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 01:53:14.851936 1554672 out.go:177] * Done! kubectl is now configured to use "addons-20210817015042-1554185" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID
	abba6af8e7f33       d544402579747       About a minute ago   Exited              catalog-operator                         5                   d7bf81a0ac291
	31c3ce405e0a9       60dc18151daf8       About a minute ago   Exited              registry-proxy                           5                   a201c543f56f4
	44d2639884aff       d544402579747       About a minute ago   Exited              olm-operator                             5                   3745d022afa82
	79b64e9292026       ab63026e5f864       3 minutes ago        Running             liveness-probe                           0                   e1b67cc269ffc
	86d072c7d6f7f       f8f69c8b53974       4 minutes ago        Running             hostpath                                 0                   e1b67cc269ffc
	95f12ea0ee9f4       1f46a863d2aa9       4 minutes ago        Running             node-driver-registrar                    0                   e1b67cc269ffc
	9803a3ca0028f       bac9ddccb0c70       4 minutes ago        Running             controller                               0                   8a5c5f5789f9b
	793ba9141ea00       ff9e753cbb985       4 minutes ago        Running             gcp-auth                                 0                   6e90e0ba1ce49
	d76b5b43f143e       b4df90000e547       4 minutes ago        Running             csi-external-health-monitor-controller   0                   e1b67cc269ffc
	33a3dc8565cfc       69724f415cab8       4 minutes ago        Running             csi-attacher                             0                   0ae516c846f1f
	7cf8bb6cdcfe2       a883f7fc35610       4 minutes ago        Exited              patch                                    0                   8703f481d86b7
	9b6f17789808c       622522dfd285b       4 minutes ago        Exited              patch                                    0                   f0f05bf84660a
	b95b30bccb5bc       622522dfd285b       4 minutes ago        Exited              create                                   0                   c46e05dac17ea
	b2165d1abb5e5       a883f7fc35610       4 minutes ago        Exited              create                                   0                   95b081ab37530
	eb2360810df1a       e3597035e9357       4 minutes ago        Running             metrics-server                           0                   42d312091cf20
	c3eb735c4bd3e       d65cad97e5f05       4 minutes ago        Running             csi-snapshotter                          0                   558825437a764
	3af33a1255a45       03c15ec36e257       4 minutes ago        Running             csi-provisioner                          0                   cb89551723f57
	2b02be61418e6       63f120615f44b       4 minutes ago        Running             csi-external-health-monitor-agent        0                   e1b67cc269ffc
	1b06e793319cd       3758cfc26c6db       4 minutes ago        Running             volume-snapshot-controller               0                   c81fe44186720
	ecf9efd7a3f01       803606888e0b1       4 minutes ago        Running             csi-resizer                              0                   f99c5fda234ab
	783c0958684bd       ba04bb24b9575       4 minutes ago        Running             storage-provisioner                      0                   0cde084873a62
	13be13e3410ac       1a1f05a2cd7c2       4 minutes ago        Running             coredns                                  0                   00cb17ddd7f4a
	6fe738b9a8dba       3758cfc26c6db       4 minutes ago        Running             volume-snapshot-controller               0                   d0b05273cbb65
	7b33a9bf5802e       f37b7c809e5dc       5 minutes ago        Running             kindnet-cni                              0                   96dbe7c3048af
	0483eb703ed0f       4ea38350a1beb       5 minutes ago        Running             kube-proxy                               0                   f0918af3dc71f
	eacccd844ca10       44a6d50ef170d       5 minutes ago        Running             kube-apiserver                           0                   a18344960e958
	615d16acf0dc7       31a3b96cefc1e       5 minutes ago        Running             kube-scheduler                           0                   99c49ff38f4e8
	29af4eb3039bc       05b738aa1bc63       5 minutes ago        Running             etcd                                     0                   c6d8e2c4d15ca
	52a4c60d098e5       cb310ff289d79       5 minutes ago        Running             kube-controller-manager                  0                   437a86afaf37b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 01:50:50 UTC, end at Tue 2021-08-17 01:57:11 UTC. --
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.307387930Z" level=info msg="TaskExit event &TaskExit{ContainerID:450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532,ID:450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532,Pid:4863,ExitStatus:2,ExitedAt:2021-08-17 01:57:10.307057989 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.357731582Z" level=info msg="shim disconnected" id=450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.357890998Z" level=error msg="copy shim log" error="read /proc/self/fd/119: file already closed"
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.360784458Z" level=info msg="StopContainer for \"450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532\" returns successfully"
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.361167035Z" level=info msg="StopPodSandbox for \"f6b03a5143e3857267ca55991d2f321058d62f407efef8b4c29b83405a4667bf\""
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.361234899Z" level=info msg="Container to stop \"450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.407647529Z" level=info msg="TaskExit event &TaskExit{ContainerID:f6b03a5143e3857267ca55991d2f321058d62f407efef8b4c29b83405a4667bf,ID:f6b03a5143e3857267ca55991d2f321058d62f407efef8b4c29b83405a4667bf,Pid:3874,ExitStatus:137,ExitedAt:2021-08-17 01:57:10.407179587 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.443803360Z" level=info msg="shim disconnected" id=f6b03a5143e3857267ca55991d2f321058d62f407efef8b4c29b83405a4667bf
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.443849398Z" level=error msg="copy shim log" error="read /proc/self/fd/211: file already closed"
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.470777367Z" level=info msg="TearDown network for sandbox \"f6b03a5143e3857267ca55991d2f321058d62f407efef8b4c29b83405a4667bf\" successfully"
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.470822355Z" level=info msg="StopPodSandbox for \"f6b03a5143e3857267ca55991d2f321058d62f407efef8b4c29b83405a4667bf\" returns successfully"
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.901356686Z" level=info msg="StopPodSandbox for \"a201c543f56f459cda4d51a98cde48b1cf01a805684a51b36baf0d13daae324e\""
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.901427175Z" level=info msg="Container to stop \"31c3ce405e0a9b68b92a5061ea1126abafef211b52df1fd251335a0426e88c7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.914077354Z" level=info msg="TaskExit event &TaskExit{ContainerID:a201c543f56f459cda4d51a98cde48b1cf01a805684a51b36baf0d13daae324e,ID:a201c543f56f459cda4d51a98cde48b1cf01a805684a51b36baf0d13daae324e,Pid:2228,ExitStatus:137,ExitedAt:2021-08-17 01:57:10.913934496 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.963117667Z" level=info msg="StopPodSandbox for \"f6b03a5143e3857267ca55991d2f321058d62f407efef8b4c29b83405a4667bf\""
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.963176260Z" level=info msg="Container to stop \"450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.972726597Z" level=info msg="shim disconnected" id=a201c543f56f459cda4d51a98cde48b1cf01a805684a51b36baf0d13daae324e
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.972857837Z" level=error msg="copy shim log" error="read /proc/self/fd/91: file already closed"
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.984218002Z" level=info msg="RemoveContainer for \"450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532\""
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.990223062Z" level=info msg="RemoveContainer for \"450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532\" returns successfully"
	Aug 17 01:57:10 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:10.993217657Z" level=error msg="ContainerStatus for \"450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532\": not found"
	Aug 17 01:57:11 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:11.001364579Z" level=info msg="TearDown network for sandbox \"f6b03a5143e3857267ca55991d2f321058d62f407efef8b4c29b83405a4667bf\" successfully"
	Aug 17 01:57:11 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:11.001473845Z" level=info msg="StopPodSandbox for \"f6b03a5143e3857267ca55991d2f321058d62f407efef8b4c29b83405a4667bf\" returns successfully"
	Aug 17 01:57:11 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:11.057967368Z" level=info msg="TearDown network for sandbox \"a201c543f56f459cda4d51a98cde48b1cf01a805684a51b36baf0d13daae324e\" successfully"
	Aug 17 01:57:11 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T01:57:11.058176244Z" level=info msg="StopPodSandbox for \"a201c543f56f459cda4d51a98cde48b1cf01a805684a51b36baf0d13daae324e\" returns successfully"
	
	* 
	* ==> coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210817015042-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210817015042-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=addons-20210817015042-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T01_51_45_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210817015042-1554185
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210817015042-1554185"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 01:51:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210817015042-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 01:57:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 01:53:23 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 01:53:23 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 01:53:23 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 01:53:23 +0000   Tue, 17 Aug 2021 01:52:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210817015042-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                ace180e0-70a7-4178-bffd-233be0529698
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-5954cc4898-5ssnv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  ingress-nginx               ingress-nginx-controller-59b45fb494-d8wsj                100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m10s
	  kube-system                 coredns-558bd4d5db-sxct6                                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m14s
	  kube-system                 csi-hostpath-attacher-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 csi-hostpath-provisioner-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 csi-hostpath-resizer-0                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 csi-hostpath-snapshotter-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 csi-hostpathplugin-0                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 etcd-addons-20210817015042-1554185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m27s
	  kube-system                 kindnet-xp2kn                                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m14s
	  kube-system                 kube-apiserver-addons-20210817015042-1554185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-addons-20210817015042-1554185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-88pjl                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-addons-20210817015042-1554185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 metrics-server-77c99ccb96-x8mh4                          100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         5m10s
	  kube-system                 registry-9np4b                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 registry-proxy-p5xh8                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 snapshot-controller-989f9ddc8-rcswn                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 snapshot-controller-989f9ddc8-zqgfr                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  olm                         catalog-operator-75d496484d-86xl7                        10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         5m5s
	  olm                         olm-operator-859c88c96-j28dd                             10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1070m (53%)  100m (5%)
	  memory             850Mi (10%)  220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m37s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m37s (x4 over 5m37s)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m37s (x3 over 5m37s)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m37s (x3 over 5m37s)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m19s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m19s                  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s                  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s                  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m12s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m38s                  kubelet     Node addons-20210817015042-1554185 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug17 01:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] <==
	* 2021-08-17 01:53:02.861567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:53:12.861280 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:53:22.861862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:53:32.861111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:53:42.861909 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:53:52.861529 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:54:02.861144 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:54:12.862141 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:54:22.861863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:54:32.861282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:54:42.861393 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:54:52.861369 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:55:02.861240 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:55:12.861369 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:55:22.861288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:55:32.861557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:55:42.861576 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:55:52.861477 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:56:02.861926 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:56:12.861201 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:56:22.861178 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:56:32.862014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:56:42.861753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:56:52.861173 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 01:57:02.862023 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  01:57:11 up  9:39,  0 users,  load average: 0.23, 0.97, 1.59
	Linux addons-20210817015042-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] <==
	* I0817 01:52:55.832889       1 client.go:360] parsed scheme: "passthrough"
	I0817 01:52:55.832923       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 01:52:55.832931       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0817 01:53:03.427755       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.10.22:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.10.22:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.10.22:443: connect: connection refused
	E0817 01:53:03.428609       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.10.22:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.10.22:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.10.22:443: connect: connection refused
	E0817 01:53:03.433912       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.10.22:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.10.22:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.10.22:443: connect: connection refused
	E0817 01:53:03.455390       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.10.22:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.10.22:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.10.22:443: connect: connection refused
	I0817 01:53:35.806901       1 client.go:360] parsed scheme: "passthrough"
	I0817 01:53:35.806941       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 01:53:35.807072       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 01:54:20.034893       1 client.go:360] parsed scheme: "passthrough"
	I0817 01:54:20.034933       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 01:54:20.034942       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 01:54:51.358501       1 client.go:360] parsed scheme: "passthrough"
	I0817 01:54:51.358539       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 01:54:51.358659       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 01:55:29.816684       1 client.go:360] parsed scheme: "passthrough"
	I0817 01:55:29.816726       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 01:55:29.816738       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 01:56:08.616283       1 client.go:360] parsed scheme: "passthrough"
	I0817 01:56:08.616323       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 01:56:08.616331       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 01:56:48.365117       1 client.go:360] parsed scheme: "passthrough"
	I0817 01:56:48.365157       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 01:56:48.365285       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] <==
	* I0817 01:52:07.254609       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
	I0817 01:52:07.308091       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
	E0817 01:52:27.209508       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0817 01:52:27.209802       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
	I0817 01:52:27.209831       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for volumesnapshots.snapshot.storage.k8s.io
	I0817 01:52:27.209868       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
	I0817 01:52:27.209952       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
	I0817 01:52:27.209987       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
	I0817 01:52:27.210060       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
	I0817 01:52:27.211453       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0817 01:52:27.412457       1 shared_informer.go:247] Caches are synced for resource quota 
	W0817 01:52:27.565215       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 01:52:27.570191       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0817 01:52:27.585755       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0817 01:52:27.587067       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0817 01:52:27.788117       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 01:52:33.056834       1 event.go:291] "Event occurred" object="kube-system/registry-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-proxy-p5xh8"
	E0817 01:52:33.075112       1 daemon_controller.go:320] kube-system/registry-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"registry-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"bbc76700-77ff-4df0-928a-e381ef3cf185", ResourceVersion:"486", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764761920, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "kubernetes.io/minikube-addons":"registry"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"kubernetes.io/minikube-addons\":\"registry\"},\"name\":\"regist
ry-proxy\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"kubernetes.io/minikube-addons\":\"registry\",\"registry-proxy\":\"true\"}},\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"kubernetes.io/minikube-addons\":\"registry\",\"registry-proxy\":\"true\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"REGISTRY_HOST\",\"value\":\"registry.kube-system.svc.cluster.local\"},{\"name\":\"REGISTRY_PORT\",\"value\":\"80\"}],\"image\":\"gcr.io/google_containers/kube-registry-proxy:0.4@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"registry-proxy\",\"ports\":[{\"containerPort\":80,\"hostPort\":5000,\"name\":\"registry\"}]}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d7e
000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d7e018)}, v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d7e030), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d7e048)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b3d3e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "kubernetes.io/minikube-addons":"registry", "registry-proxy":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(n
il), Containers:[]v1.Container{v1.Container{Name:"registry-proxy", Image:"gcr.io/google_containers/kube-registry-proxy:0.4@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"registry", HostPort:5000, ContainerPort:80, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"REGISTRY_HOST", Value:"registry.kube-system.svc.cluster.local", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"REGISTRY_PORT", Value:"80", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPre
sent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001d2d158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004f7d50), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:
v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001d63790)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001d2d16c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "registry-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0817 01:52:36.883695       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0817 01:52:56.050693       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0817 01:52:56.851746       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0817 01:52:57.251384       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	E0817 01:52:57.435870       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0817 01:52:57.652302       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	W0817 01:52:57.808910       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] <==
	* I0817 01:51:59.199305       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 01:51:59.199348       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 01:51:59.199381       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 01:51:59.228513       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 01:51:59.228548       1 server_others.go:212] Using iptables Proxier.
	I0817 01:51:59.228558       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 01:51:59.228568       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 01:51:59.229489       1 server.go:643] Version: v1.21.3
	I0817 01:51:59.234867       1 config.go:315] Starting service config controller
	I0817 01:51:59.234890       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 01:51:59.236683       1 config.go:224] Starting endpoint slice config controller
	I0817 01:51:59.236698       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 01:51:59.242351       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 01:51:59.243149       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 01:51:59.338912       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 01:51:59.338971       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] <==
	* W0817 01:51:41.468231       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 01:51:41.468338       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 01:51:41.468430       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 01:51:41.611648       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 01:51:41.615019       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 01:51:41.616612       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 01:51:41.616756       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 01:51:41.622145       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 01:51:41.624737       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 01:51:41.627800       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 01:51:41.628373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 01:51:41.628434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 01:51:41.628492       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 01:51:41.628547       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 01:51:41.628600       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628650       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628754       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 01:51:41.628805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 01:51:41.630964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 01:51:41.631026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:42.555258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 01:51:42.563233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 01:51:42.595603       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 01:51:44.616129       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 01:50:50 UTC, end at Tue 2021-08-17 01:57:11 UTC. --
	Aug 17 01:56:29 addons-20210817015042-1554185 kubelet[1147]: E0817 01:56:29.896348    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 01:56:30 addons-20210817015042-1554185 kubelet[1147]: I0817 01:56:30.895751    1147 scope.go:111] "RemoveContainer" containerID="31c3ce405e0a9b68b92a5061ea1126abafef211b52df1fd251335a0426e88c7a"
	Aug 17 01:56:30 addons-20210817015042-1554185 kubelet[1147]: E0817 01:56:30.896153    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-p5xh8_kube-system(0a7638e5-9e17-4626-aeb9-b7fe2abe695d)\"" pod="kube-system/registry-proxy-p5xh8" podUID=0a7638e5-9e17-4626-aeb9-b7fe2abe695d
	Aug 17 01:56:39 addons-20210817015042-1554185 kubelet[1147]: I0817 01:56:39.896205    1147 scope.go:111] "RemoveContainer" containerID="44d2639884aff74953e7c7b135413e7ffa2b2a00a0b9409b26717647d1e681df"
	Aug 17 01:56:39 addons-20210817015042-1554185 kubelet[1147]: E0817 01:56:39.897006    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 01:56:42 addons-20210817015042-1554185 kubelet[1147]: I0817 01:56:42.896321    1147 scope.go:111] "RemoveContainer" containerID="31c3ce405e0a9b68b92a5061ea1126abafef211b52df1fd251335a0426e88c7a"
	Aug 17 01:56:42 addons-20210817015042-1554185 kubelet[1147]: E0817 01:56:42.896976    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-p5xh8_kube-system(0a7638e5-9e17-4626-aeb9-b7fe2abe695d)\"" pod="kube-system/registry-proxy-p5xh8" podUID=0a7638e5-9e17-4626-aeb9-b7fe2abe695d
	Aug 17 01:56:44 addons-20210817015042-1554185 kubelet[1147]: I0817 01:56:44.895804    1147 scope.go:111] "RemoveContainer" containerID="abba6af8e7f336628c35efe6e0ec6e85129d96d69f1c001593a758a89b1bb001"
	Aug 17 01:56:44 addons-20210817015042-1554185 kubelet[1147]: E0817 01:56:44.896186    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 01:56:53 addons-20210817015042-1554185 kubelet[1147]: I0817 01:56:53.896319    1147 scope.go:111] "RemoveContainer" containerID="31c3ce405e0a9b68b92a5061ea1126abafef211b52df1fd251335a0426e88c7a"
	Aug 17 01:56:53 addons-20210817015042-1554185 kubelet[1147]: E0817 01:56:53.896588    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-p5xh8_kube-system(0a7638e5-9e17-4626-aeb9-b7fe2abe695d)\"" pod="kube-system/registry-proxy-p5xh8" podUID=0a7638e5-9e17-4626-aeb9-b7fe2abe695d
	Aug 17 01:56:54 addons-20210817015042-1554185 kubelet[1147]: I0817 01:56:54.898740    1147 scope.go:111] "RemoveContainer" containerID="44d2639884aff74953e7c7b135413e7ffa2b2a00a0b9409b26717647d1e681df"
	Aug 17 01:56:54 addons-20210817015042-1554185 kubelet[1147]: E0817 01:56:54.900241    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 01:56:55 addons-20210817015042-1554185 kubelet[1147]: I0817 01:56:55.896075    1147 scope.go:111] "RemoveContainer" containerID="abba6af8e7f336628c35efe6e0ec6e85129d96d69f1c001593a758a89b1bb001"
	Aug 17 01:56:55 addons-20210817015042-1554185 kubelet[1147]: E0817 01:56:55.896472    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 01:57:06 addons-20210817015042-1554185 kubelet[1147]: I0817 01:57:06.896072    1147 scope.go:111] "RemoveContainer" containerID="abba6af8e7f336628c35efe6e0ec6e85129d96d69f1c001593a758a89b1bb001"
	Aug 17 01:57:06 addons-20210817015042-1554185 kubelet[1147]: E0817 01:57:06.896476    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 01:57:06 addons-20210817015042-1554185 kubelet[1147]: I0817 01:57:06.896899    1147 scope.go:111] "RemoveContainer" containerID="31c3ce405e0a9b68b92a5061ea1126abafef211b52df1fd251335a0426e88c7a"
	Aug 17 01:57:06 addons-20210817015042-1554185 kubelet[1147]: E0817 01:57:06.897123    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-p5xh8_kube-system(0a7638e5-9e17-4626-aeb9-b7fe2abe695d)\"" pod="kube-system/registry-proxy-p5xh8" podUID=0a7638e5-9e17-4626-aeb9-b7fe2abe695d
	Aug 17 01:57:09 addons-20210817015042-1554185 kubelet[1147]: I0817 01:57:09.895640    1147 scope.go:111] "RemoveContainer" containerID="44d2639884aff74953e7c7b135413e7ffa2b2a00a0b9409b26717647d1e681df"
	Aug 17 01:57:09 addons-20210817015042-1554185 kubelet[1147]: E0817 01:57:09.896030    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 01:57:10 addons-20210817015042-1554185 kubelet[1147]: I0817 01:57:10.962538    1147 scope.go:111] "RemoveContainer" containerID="450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532"
	Aug 17 01:57:10 addons-20210817015042-1554185 kubelet[1147]: I0817 01:57:10.990446    1147 scope.go:111] "RemoveContainer" containerID="450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532"
	Aug 17 01:57:10 addons-20210817015042-1554185 kubelet[1147]: E0817 01:57:10.993449    1147 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532\": not found" containerID="450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532"
	Aug 17 01:57:10 addons-20210817015042-1554185 kubelet[1147]: I0817 01:57:10.993496    1147 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532} err="failed to get container status \"450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532\": rpc error: code = NotFound desc = an error occurred when try to find container \"450c01a001de15451d8c0530661b6988628b3dbca89731873724a7150a349532\": not found"
	
	* 
	* ==> storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] <==
	* I0817 01:52:45.168349       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 01:52:45.223745       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 01:52:45.226921       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 01:52:45.243264       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 01:52:45.243748       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41860dbd-59f4-40f3-b06c-d38f89989bf1", APIVersion:"v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01 became leader
	I0817 01:52:45.243789       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01!
	I0817 01:52:45.346906       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210817015042-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: gcp-auth-certs-create-zlhwv gcp-auth-certs-patch-jwk95 ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210817015042-1554185 describe pod gcp-auth-certs-create-zlhwv gcp-auth-certs-patch-jwk95 ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210817015042-1554185 describe pod gcp-auth-certs-create-zlhwv gcp-auth-certs-patch-jwk95 ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j: exit status 1 (77.186016ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-zlhwv" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-jwk95" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-msw6w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xpb6j" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210817015042-1554185 describe pod gcp-auth-certs-create-zlhwv gcp-auth-certs-patch-jwk95 ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j: exit status 1
--- FAIL: TestAddons/parallel/Registry (237.38s)

                                                
                                    
TestAddons/parallel/Ingress (243.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-msw6w" [0d1414d6-6d5d-4531-81bd-038397450562] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 5.796619ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210817015042-1554185 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210817015042-1554185 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [ed08e79b-1781-4708-8f29-d5b69cc3c7c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 4m0s: timed out waiting for the condition ****
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
addons_test.go:185: TestAddons/parallel/Ingress: showing logs for failed pods as of 2021-08-17 02:07:59.119764623 +0000 UTC m=+1109.914512423
addons_test.go:185: (dbg) Run:  kubectl --context addons-20210817015042-1554185 describe po nginx -n default
addons_test.go:185: (dbg) kubectl --context addons-20210817015042-1554185 describe po nginx -n default:
Name:         nginx
Namespace:    default
Priority:     0
Node:         addons-20210817015042-1554185/192.168.49.2
Start Time:   Tue, 17 Aug 2021 02:03:58 +0000
Labels:       run=nginx
Annotations:  <none>
Status:       Pending
IP:           10.244.0.24
IPs:
  IP:  10.244.0.24
Containers:
  nginx:
    Container ID:   
    Image:          nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nnxzp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-nnxzp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m1s                   default-scheduler  Successfully assigned default/nginx to addons-20210817015042-1554185
  Normal   Pulling    2m35s (x4 over 4m)     kubelet            Pulling image "nginx:alpine"
  Warning  Failed     2m34s (x4 over 3m59s)  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     2m34s (x4 over 3m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     2m6s (x6 over 3m59s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    114s (x7 over 3m59s)   kubelet            Back-off pulling image "nginx:alpine"
addons_test.go:185: (dbg) Run:  kubectl --context addons-20210817015042-1554185 logs nginx -n default
addons_test.go:185: (dbg) Non-zero exit: kubectl --context addons-20210817015042-1554185 logs nginx -n default: exit status 1 (98.781749ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:185: kubectl --context addons-20210817015042-1554185 logs nginx -n default: exit status 1
addons_test.go:186: failed waiting for ngnix pod: run=nginx within 4m0s: timed out waiting for the condition
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210817015042-1554185
helpers_test.go:236: (dbg) docker inspect addons-20210817015042-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416",
	        "Created": "2021-08-17T01:50:49.008425565Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1555108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T01:50:49.513909075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/hosts",
	        "LogPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416-json.log",
	        "Name": "/addons-20210817015042-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210817015042-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210817015042-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/merged",
	                "UpperDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/diff",
	                "WorkDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210817015042-1554185",
	                "Source": "/var/lib/docker/volumes/addons-20210817015042-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210817015042-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210817015042-1554185",
	                "name.minikube.sigs.k8s.io": "addons-20210817015042-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3e0a22fba78ee7873eb198b4450cb747bf4f2dc90aa87985648e04a1bfa9520",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50314"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50313"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50310"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50312"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50311"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3e0a22fba78",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210817015042-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d0219469219e",
	                        "addons-20210817015042-1554185"
	                    ],
	                    "NetworkID": "a9a617dbec2c4687c7bfc4bea262a36b8329d70029602dc944aed84d4dfb4f83",
	                    "EndpointID": "dad39de7953aad4709a05c2c9027de032d29f0302e6751762f5bb275759d2909",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210817015042-1554185 logs -n 25
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                  | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	| delete  | -p                                     | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	|         | download-only-20210817014929-1554185   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	|         | download-only-20210817014929-1554185   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-docker-20210817015028-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:42 UTC | Tue, 17 Aug 2021 01:50:42 UTC |
	|         | download-docker-20210817015028-1554185 |                                        |         |         |                               |                               |
	| start   | -p                                     | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:43 UTC | Tue, 17 Aug 2021 01:53:14 UTC |
	|         | addons-20210817015042-1554185          |                                        |         |         |                               |                               |
	|         | --wait=true --memory=4000              |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --addons=registry                      |                                        |         |         |                               |                               |
	|         | --addons=metrics-server                |                                        |         |         |                               |                               |
	|         | --addons=olm                           |                                        |         |         |                               |                               |
	|         | --addons=volumesnapshots               |                                        |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver           |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --addons=ingress                       |                                        |         |         |                               |                               |
	|         | --addons=gcp-auth                      |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:54:25 UTC | Tue, 17 Aug 2021 01:54:25 UTC |
	|         | ip                                     |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:57:09 UTC | Tue, 17 Aug 2021 01:57:10 UTC |
	|         | addons disable registry                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:57:10 UTC | Tue, 17 Aug 2021 01:57:11 UTC |
	|         | logs -n 25                             |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:57:21 UTC | Tue, 17 Aug 2021 01:57:48 UTC |
	|         | addons disable gcp-auth                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:03:49 UTC | Tue, 17 Aug 2021 02:03:51 UTC |
	|         | logs -n 25                             |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:03:57 UTC | Tue, 17 Aug 2021 02:03:57 UTC |
	|         | addons disable metrics-server          |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:05:25 UTC | Tue, 17 Aug 2021 02:05:26 UTC |
	|         | logs -n 25                             |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 01:50:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 01:50:43.004283 1554672 out.go:298] Setting OutFile to fd 1 ...
	I0817 01:50:43.004408 1554672 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:50:43.004431 1554672 out.go:311] Setting ErrFile to fd 2...
	I0817 01:50:43.004441 1554672 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:50:43.004581 1554672 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 01:50:43.004871 1554672 out.go:305] Setting JSON to false
	I0817 01:50:43.005775 1554672 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34381,"bootTime":1629130662,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 01:50:43.005843 1554672 start.go:121] virtualization:  
	I0817 01:50:43.008113 1554672 out.go:177] * [addons-20210817015042-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 01:50:43.010059 1554672 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 01:50:43.009081 1554672 notify.go:169] Checking for updates...
	I0817 01:50:43.011571 1554672 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 01:50:43.013130 1554672 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 01:50:43.014848 1554672 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 01:50:43.015025 1554672 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 01:50:43.049197 1554672 docker.go:132] docker version: linux-20.10.8
	I0817 01:50:43.049279 1554672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:50:43.144133 1554672 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:50:43.088038469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:50:43.144227 1554672 docker.go:244] overlay module found
	I0817 01:50:43.146324 1554672 out.go:177] * Using the docker driver based on user configuration
	I0817 01:50:43.146348 1554672 start.go:278] selected driver: docker
	I0817 01:50:43.146353 1554672 start.go:751] validating driver "docker" against <nil>
	I0817 01:50:43.146367 1554672 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 01:50:43.146408 1554672 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 01:50:43.146423 1554672 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 01:50:43.147842 1554672 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 01:50:43.148132 1554672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:50:43.222251 1554672 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:50:43.17341921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:50:43.222365 1554672 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 01:50:43.222521 1554672 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 01:50:43.222542 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:50:43.222549 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:50:43.222565 1554672 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 01:50:43.222570 1554672 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 01:50:43.222582 1554672 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 01:50:43.222589 1554672 start_flags.go:277] config:
	{Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
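
The cluster config dumped above is what gets persisted to the profile's config.json (see the "Saving config to ..." lines nearby). As a rough way to inspect such a file, here is a minimal Go sketch; the struct is a hand-written subset whose field names follow the keys shown in the dump, not minikube's own loader, and the path is a placeholder.

```go
// Minimal sketch (not minikube code): read a saved profile config.json and
// print a few of the fields visible in the dump above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type partialConfig struct {
	Name             string
	Driver           string
	Memory           int
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
		NetworkPlugin     string
	}
}

func main() {
	// Placeholder path; substitute the profile directory shown in the log.
	data, err := os.ReadFile("config.json")
	if err != nil {
		log.Fatal(err)
	}
	var cfg partialConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s memory=%dMB\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime,
		cfg.KubernetesConfig.KubernetesVersion, cfg.Memory)
}
```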
	I0817 01:50:43.224429 1554672 out.go:177] * Starting control plane node addons-20210817015042-1554185 in cluster addons-20210817015042-1554185
	I0817 01:50:43.224467 1554672 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 01:50:43.226166 1554672 out.go:177] * Pulling base image ...
	I0817 01:50:43.226186 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:50:43.226218 1554672 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 01:50:43.226230 1554672 cache.go:56] Caching tarball of preloaded images
	I0817 01:50:43.226359 1554672 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 01:50:43.226380 1554672 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 01:50:43.226662 1554672 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json ...
	I0817 01:50:43.226688 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json: {Name:mk832a7647425177a5f2be8874629457bb58883b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:50:43.226846 1554672 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 01:50:43.267020 1554672 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 01:50:43.267048 1554672 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 01:50:43.267060 1554672 cache.go:205] Successfully downloaded all kic artifacts
	I0817 01:50:43.267095 1554672 start.go:313] acquiring machines lock for addons-20210817015042-1554185: {Name:mkc848aa47e63f497fa6d048b39bc33e9d106216 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 01:50:43.267208 1554672 start.go:317] acquired machines lock for "addons-20210817015042-1554185" in 92.061µs
	I0817 01:50:43.267235 1554672 start.go:89] Provisioning new machine with config: &{Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 01:50:43.267309 1554672 start.go:126] createHost starting for "" (driver="docker")
	I0817 01:50:43.269344 1554672 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0817 01:50:43.269558 1554672 start.go:160] libmachine.API.Create for "addons-20210817015042-1554185" (driver="docker")
	I0817 01:50:43.269585 1554672 client.go:168] LocalClient.Create starting
	I0817 01:50:43.269667 1554672 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0817 01:50:43.834992 1554672 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0817 01:50:44.271080 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 01:50:44.298072 1554672 cli_runner.go:162] docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 01:50:44.298133 1554672 network_create.go:255] running [docker network inspect addons-20210817015042-1554185] to gather additional debugging logs...
	I0817 01:50:44.298149 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185
	W0817 01:50:44.324372 1554672 cli_runner.go:162] docker network inspect addons-20210817015042-1554185 returned with exit code 1
	I0817 01:50:44.324396 1554672 network_create.go:258] error running [docker network inspect addons-20210817015042-1554185]: docker network inspect addons-20210817015042-1554185: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210817015042-1554185
	I0817 01:50:44.324409 1554672 network_create.go:260] output of [docker network inspect addons-20210817015042-1554185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210817015042-1554185
	
	** /stderr **
	I0817 01:50:44.324473 1554672 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 01:50:44.351093 1554672 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x40005be280] misses:0}
	I0817 01:50:44.351140 1554672 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 01:50:44.351162 1554672 network_create.go:106] attempt to create docker network addons-20210817015042-1554185 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 01:50:44.351211 1554672 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210817015042-1554185
	I0817 01:50:44.413803 1554672 network_create.go:90] docker network addons-20210817015042-1554185 192.168.49.0/24 created
	I0817 01:50:44.413829 1554672 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210817015042-1554185" container
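
The lines above reserve 192.168.49.0/24, use 192.168.49.1 as the gateway, and derive 192.168.49.2 (the first client address) as the node's static IP. A small Go sketch of that subnet arithmetic, purely illustrative and not minikube's implementation:

```go
// Illustrative sketch of the subnet arithmetic in the log: given the reserved
// CIDR, the gateway is the first host address and the node gets the second.
package main

import (
	"fmt"
	"log"
	"net"
)

// nextIP returns ip + 1 (IPv4), used to walk host addresses in order.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, subnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		log.Fatal(err)
	}
	gateway := nextIP(subnet.IP) // 192.168.49.1
	nodeIP := nextIP(gateway)    // 192.168.49.2, the static IP in the log
	fmt.Println("subnet:", subnet, "gateway:", gateway, "node:", nodeIP)
}
```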
	I0817 01:50:44.413892 1554672 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 01:50:44.440106 1554672 cli_runner.go:115] Run: docker volume create addons-20210817015042-1554185 --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --label created_by.minikube.sigs.k8s.io=true
	I0817 01:50:44.467518 1554672 oci.go:102] Successfully created a docker volume addons-20210817015042-1554185
	I0817 01:50:44.467581 1554672 cli_runner.go:115] Run: docker run --rm --name addons-20210817015042-1554185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --entrypoint /usr/bin/test -v addons-20210817015042-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 01:50:48.841251 1554672 cli_runner.go:168] Completed: docker run --rm --name addons-20210817015042-1554185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --entrypoint /usr/bin/test -v addons-20210817015042-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (4.373634594s)
	I0817 01:50:48.841276 1554672 oci.go:106] Successfully prepared a docker volume addons-20210817015042-1554185
	W0817 01:50:48.841301 1554672 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0817 01:50:48.841310 1554672 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0817 01:50:48.841360 1554672 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 01:50:48.841549 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:50:48.841570 1554672 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 01:50:48.841627 1554672 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210817015042-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 01:50:48.971581 1554672 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210817015042-1554185 --name addons-20210817015042-1554185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210817015042-1554185 --network addons-20210817015042-1554185 --ip 192.168.49.2 --volume addons-20210817015042-1554185:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 01:50:49.523596 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Running}}
	I0817 01:50:49.590786 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:50:49.633119 1554672 cli_runner.go:115] Run: docker exec addons-20210817015042-1554185 stat /var/lib/dpkg/alternatives/iptables
	I0817 01:50:49.741896 1554672 oci.go:278] the created container "addons-20210817015042-1554185" has a running status.
	I0817 01:50:49.741921 1554672 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa...
	I0817 01:50:50.532064 1554672 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 01:50:50.667778 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:50:50.707368 1554672 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 01:50:50.707384 1554672 kic_runner.go:115] Args: [docker exec --privileged addons-20210817015042-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 01:51:00.466206 1554672 kic_runner.go:124] Done: [docker exec --privileged addons-20210817015042-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]: (9.758798263s)
	I0817 01:51:02.783214 1554672 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210817015042-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (13.941553277s)
	I0817 01:51:02.783245 1554672 kic.go:188] duration metric: took 13.941672 seconds to extract preloaded images to volume
	I0817 01:51:02.783324 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:02.814748 1554672 machine.go:88] provisioning docker machine ...
	I0817 01:51:02.814781 1554672 ubuntu.go:169] provisioning hostname "addons-20210817015042-1554185"
	I0817 01:51:02.814865 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:02.842333 1554672 main.go:130] libmachine: Using SSH client type: native
	I0817 01:51:02.842498 1554672 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50314 <nil> <nil>}
	I0817 01:51:02.842516 1554672 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210817015042-1554185 && echo "addons-20210817015042-1554185" | sudo tee /etc/hostname
	I0817 01:51:02.970606 1554672 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210817015042-1554185
	
	I0817 01:51:02.970693 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:02.999373 1554672 main.go:130] libmachine: Using SSH client type: native
	I0817 01:51:02.999533 1554672 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50314 <nil> <nil>}
	I0817 01:51:02.999560 1554672 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210817015042-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210817015042-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210817015042-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 01:51:03.114034 1554672 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 01:51:03.114055 1554672 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 01:51:03.114074 1554672 ubuntu.go:177] setting up certificates
	I0817 01:51:03.114082 1554672 provision.go:83] configureAuth start
	I0817 01:51:03.114135 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.141579 1554672 provision.go:138] copyHostCerts
	I0817 01:51:03.141653 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 01:51:03.141736 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 01:51:03.141784 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 01:51:03.141822 1554672 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.addons-20210817015042-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210817015042-1554185]
	I0817 01:51:03.398920 1554672 provision.go:172] copyRemoteCerts
	I0817 01:51:03.398968 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 01:51:03.399007 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.426820 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.508566 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 01:51:03.525114 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0817 01:51:03.539071 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 01:51:03.553109 1554672 provision.go:86] duration metric: configureAuth took 439.012307ms
	I0817 01:51:03.553124 1554672 ubuntu.go:193] setting minikube options for container-runtime
	I0817 01:51:03.553268 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:03.553275 1554672 machine.go:91] provisioned docker machine in 738.505134ms
	I0817 01:51:03.553280 1554672 client.go:171] LocalClient.Create took 20.283690224s
	I0817 01:51:03.553289 1554672 start.go:168] duration metric: libmachine.API.Create for "addons-20210817015042-1554185" took 20.283731225s
	I0817 01:51:03.553296 1554672 start.go:267] post-start starting for "addons-20210817015042-1554185" (driver="docker")
	I0817 01:51:03.553301 1554672 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 01:51:03.553340 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 01:51:03.553372 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.581866 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.664711 1554672 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 01:51:03.667021 1554672 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 01:51:03.667044 1554672 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 01:51:03.667055 1554672 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 01:51:03.667073 1554672 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 01:51:03.667081 1554672 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 01:51:03.667131 1554672 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 01:51:03.667155 1554672 start.go:270] post-start completed in 113.85344ms
	I0817 01:51:03.667437 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.695177 1554672 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json ...
	I0817 01:51:03.695366 1554672 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 01:51:03.695414 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.722965 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.802744 1554672 start.go:129] duration metric: createHost completed in 20.535424588s
	I0817 01:51:03.802761 1554672 start.go:80] releasing machines lock for "addons-20210817015042-1554185", held for 20.535539837s
	I0817 01:51:03.802834 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.830388 1554672 ssh_runner.go:149] Run: systemctl --version
	I0817 01:51:03.830437 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.830658 1554672 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 01:51:03.830713 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.864441 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.872939 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.950680 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 01:51:04.148514 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 01:51:04.156921 1554672 docker.go:153] disabling docker service ...
	I0817 01:51:04.156964 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 01:51:04.172287 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 01:51:04.180567 1554672 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 01:51:04.253873 1554672 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 01:51:04.337794 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 01:51:04.346079 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 01:51:04.356986 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
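
The command above pipes a base64-encoded config.toml through `base64 -d` into /etc/containerd/config.toml. To see what actually lands there, the embedded blob can be decoded with any base64 tool; the following is a minimal Go equivalent of `base64 -d` reading from stdin, included here only as a convenience sketch:

```go
// Tiny helper equivalent to `base64 -d`: decode the blob embedded in the
// command above to view the generated /etc/containerd/config.toml.
package main

import (
	"encoding/base64"
	"io"
	"log"
	"os"
)

func main() {
	dec := base64.NewDecoder(base64.StdEncoding, os.Stdin)
	if _, err := io.Copy(os.Stdout, dec); err != nil {
		log.Fatal(err)
	}
}
```

Piping the quoted string into this (or into `base64 -d` directly) shows the generated containerd settings, including the CNI conf_dir of /etc/cni/net.mk that the kubelet extra-config above also points at.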
	I0817 01:51:04.369213 1554672 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 01:51:04.375739 1554672 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 01:51:04.381264 1554672 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 01:51:04.455762 1554672 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 01:51:04.531663 1554672 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 01:51:04.531729 1554672 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 01:51:04.535130 1554672 start.go:413] Will wait 60s for crictl version
	I0817 01:51:04.535189 1554672 ssh_runner.go:149] Run: sudo crictl version
	I0817 01:51:04.564551 1554672 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T01:51:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 01:51:15.611398 1554672 ssh_runner.go:149] Run: sudo crictl version
	I0817 01:51:15.634965 1554672 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 01:51:15.635034 1554672 ssh_runner.go:149] Run: containerd --version
	I0817 01:51:15.656211 1554672 ssh_runner.go:149] Run: containerd --version
	I0817 01:51:15.679165 1554672 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 01:51:15.679262 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 01:51:15.708112 1554672 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 01:51:15.711074 1554672 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 01:51:15.720057 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:51:15.720115 1554672 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 01:51:15.753630 1554672 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 01:51:15.753654 1554672 containerd.go:517] Images already preloaded, skipping extraction
	I0817 01:51:15.753696 1554672 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 01:51:15.775284 1554672 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 01:51:15.775306 1554672 cache_images.go:74] Images are preloaded, skipping loading
	I0817 01:51:15.775376 1554672 ssh_runner.go:149] Run: sudo crictl info
	I0817 01:51:15.796264 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:51:15.796286 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:51:15.796297 1554672 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 01:51:15.796310 1554672 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210817015042-1554185 NodeName:addons-20210817015042-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 01:51:15.796446 1554672 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "addons-20210817015042-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
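
The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A quick, illustrative Go sketch for listing the kind of each document in such a file, assuming gopkg.in/yaml.v3 is available on the module path:

```go
// Sketch: print the apiVersion/kind of every document in a multi-document
// kubeadm config like the one dumped above. Not part of minikube.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. the generated kubeadm.yaml.new
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```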
	
	I0817 01:51:15.796533 1554672 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-20210817015042-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 01:51:15.796591 1554672 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 01:51:15.802721 1554672 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 01:51:15.802788 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 01:51:15.808456 1554672 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (574 bytes)
	I0817 01:51:15.819782 1554672 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 01:51:15.830993 1554672 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0817 01:51:15.841895 1554672 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 01:51:15.844431 1554672 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 01:51:15.852834 1554672 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185 for IP: 192.168.49.2
	I0817 01:51:15.852892 1554672 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 01:51:16.232897 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt ...
	I0817 01:51:16.232924 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt: {Name:mkc452a3ca463d1cef7aa1398b1abd9dddd24545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.233112 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key ...
	I0817 01:51:16.233129 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key: {Name:mkb1c0cc6e35e952c8fa312da56d58ae26957187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.233218 1554672 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 01:51:16.929155 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt ...
	I0817 01:51:16.929187 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt: {Name:mk17a5a660a62b953e570d93eac621069f930efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.929368 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key ...
	I0817 01:51:16.929384 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key: {Name:mk40bf80fb6d166c627fea37bd45ce901649a411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.929516 1554672 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key
	I0817 01:51:16.929537 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt with IP's: []
	I0817 01:51:17.141841 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt ...
	I0817 01:51:17.141869 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: {Name:mk127978c85cd8b22e7e4466afd86c3104950f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.142041 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key ...
	I0817 01:51:17.142056 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key: {Name:mk9c80a73b58e8a5fc9e3f4aca38da7b4d098319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.142143 1554672 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2
	I0817 01:51:17.142152 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 01:51:17.697755 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 ...
	I0817 01:51:17.697786 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2: {Name:mk68739490e6778fecd80380c013c3c92d6d4458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.698773 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2 ...
	I0817 01:51:17.698790 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2: {Name:mk844eac0cbe48c9235e9d8a8ec3aa0d9a836734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.698894 1554672 certs.go:308] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt
	I0817 01:51:17.698954 1554672 certs.go:312] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key
	I0817 01:51:17.699002 1554672 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key
	I0817 01:51:17.699012 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt with IP's: []
	I0817 01:51:18.551109 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt ...
	I0817 01:51:18.551144 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt: {Name:mkb606f4652991a4936ad1fb4f336e911d7af05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:18.551327 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key ...
	I0817 01:51:18.551342 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key: {Name:mkddf28d3df3bc53b2858cabdc2cbc08941228fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:18.551516 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 01:51:18.551557 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 01:51:18.551586 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 01:51:18.551613 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 01:51:18.554164 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 01:51:18.569715 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 01:51:18.584425 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 01:51:18.598873 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 01:51:18.613294 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 01:51:18.627638 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 01:51:18.642450 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 01:51:18.657110 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 01:51:18.671462 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 01:51:18.686137 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 01:51:18.696974 1554672 ssh_runner.go:149] Run: openssl version
	I0817 01:51:18.701232 1554672 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 01:51:18.707560 1554672 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.710430 1554672 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.710492 1554672 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.714912 1554672 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
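	The two openssl/ln steps above make the minikube CA trusted system-wide: the cert's subject hash becomes the name of a symlink in /etc/ssl/certs. The following is a minimal standalone Go sketch of that pattern (it is not minikube's code; the pem path is taken from the log, and openssl is assumed to be on PATH):

	// Sketch only: hash a CA cert with openssl and create the <hash>.0 symlink
	// in /etc/ssl/certs so the system trust store picks it up.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func trustCA(pemPath string) error {
		// Equivalent of: openssl x509 -hash -noout -in <pemPath>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Equivalent of: ln -fs <pemPath> /etc/ssl/certs/<hash>.0
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := trustCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}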
	I0817 01:51:18.721735 1554672 kubeadm.go:390] StartCluster: {Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:51:18.721819 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 01:51:18.721874 1554672 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 01:51:18.749686 1554672 cri.go:76] found id: ""
	I0817 01:51:18.749758 1554672 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 01:51:18.755843 1554672 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 01:51:18.761633 1554672 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 01:51:18.761681 1554672 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 01:51:18.767360 1554672 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 01:51:18.767403 1554672 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 01:51:44.489308 1554672 out.go:204]   - Generating certificates and keys ...
	I0817 01:51:44.492258 1554672 out.go:204]   - Booting up control plane ...
	I0817 01:51:44.495405 1554672 out.go:204]   - Configuring RBAC rules ...
	I0817 01:51:44.497771 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:51:44.497802 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:51:44.499744 1554672 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 01:51:44.499923 1554672 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 01:51:44.514536 1554672 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 01:51:44.514555 1554672 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 01:51:44.537490 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 01:51:45.293207 1554672 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 01:51:45.293283 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:45.293354 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=addons-20210817015042-1554185 minikube.k8s.io/updated_at=2021_08_17T01_51_45_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:45.441142 1554672 ops.go:34] apiserver oom_adj: -16
	I0817 01:51:45.441307 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:46.028917 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:46.528512 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:47.028526 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:47.529129 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:48.028453 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:48.529207 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:49.029151 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:49.528902 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:50.028980 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:50.528509 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:51.028493 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:51.528957 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:52.028487 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:52.529123 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:53.029078 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:53.528513 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:54.029046 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:54.529488 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:55.029473 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:55.529461 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:56.029173 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:56.529368 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.028522 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.528583 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.667234 1554672 kubeadm.go:985] duration metric: took 12.373989422s to wait for elevateKubeSystemPrivileges.
	I0817 01:51:57.667260 1554672 kubeadm.go:392] StartCluster complete in 38.945530358s
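	The repeated "kubectl get sa default" runs above are a simple poll: the same command is retried roughly every 500ms until the default service account exists, which took about 12s here. A minimal standalone Go sketch of that polling pattern (not minikube's elevateKubeSystemPrivileges code; it assumes kubectl is on PATH and takes the kubeconfig path from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA re-runs "kubectl get sa default" until it succeeds or the
	// timeout expires, sleeping 500ms between attempts.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // service account is available
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}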
	I0817 01:51:57.667277 1554672 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:57.667387 1554672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 01:51:57.667820 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:58.208648 1554672 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210817015042-1554185" rescaled to 1
	I0817 01:51:58.208753 1554672 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 01:51:58.211352 1554672 out.go:177] * Verifying Kubernetes components...
	I0817 01:51:58.211399 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 01:51:58.208813 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 01:51:58.208883 1554672 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0817 01:51:58.211533 1554672 addons.go:59] Setting volumesnapshots=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.211549 1554672 addons.go:135] Setting addon volumesnapshots=true in "addons-20210817015042-1554185"
	I0817 01:51:58.211576 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.212086 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.212136 1554672 addons.go:59] Setting ingress=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.212151 1554672 addons.go:135] Setting addon ingress=true in "addons-20210817015042-1554185"
	I0817 01:51:58.212176 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.212566 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.212695 1554672 addons.go:59] Setting metrics-server=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.212709 1554672 addons.go:135] Setting addon metrics-server=true in "addons-20210817015042-1554185"
	I0817 01:51:58.212725 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.213112 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.213170 1554672 addons.go:59] Setting olm=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.213183 1554672 addons.go:135] Setting addon olm=true in "addons-20210817015042-1554185"
	I0817 01:51:58.213199 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.213584 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.213633 1554672 addons.go:59] Setting registry=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.213647 1554672 addons.go:135] Setting addon registry=true in "addons-20210817015042-1554185"
	I0817 01:51:58.213662 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.214028 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.214076 1554672 addons.go:59] Setting storage-provisioner=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.214086 1554672 addons.go:135] Setting addon storage-provisioner=true in "addons-20210817015042-1554185"
	W0817 01:51:58.214091 1554672 addons.go:147] addon storage-provisioner should already be in state true
	I0817 01:51:58.214110 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.214476 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.214533 1554672 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.214556 1554672 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210817015042-1554185"
	I0817 01:51:58.214578 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.216636 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.209046 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:58.236087 1554672 addons.go:59] Setting default-storageclass=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.236110 1554672 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210817015042-1554185"
	I0817 01:51:58.236416 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.236497 1554672 addons.go:59] Setting gcp-auth=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.236511 1554672 mustload.go:65] Loading cluster: addons-20210817015042-1554185
	I0817 01:51:58.236645 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:58.236850 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.386147 1554672 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0817 01:51:58.387928 1554672 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0817 01:51:58.390873 1554672 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0817 01:51:58.390923 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0817 01:51:58.390932 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0817 01:51:58.390988 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.537862 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0817 01:51:58.539571 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0817 01:51:58.541491 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0817 01:51:58.545257 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0817 01:51:58.547105 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0817 01:51:58.548521 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0817 01:51:58.548575 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0817 01:51:58.548588 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0817 01:51:58.550068 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0817 01:51:58.548640 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.554947 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0817 01:51:58.556578 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0817 01:51:58.558141 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0817 01:51:58.558186 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0817 01:51:58.558193 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0817 01:51:58.558233 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.584190 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.586499 1554672 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0817 01:51:58.586556 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 01:51:58.586564 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 01:51:58.586607 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.586985 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 01:51:58.588594 1554672 node_ready.go:35] waiting up to 6m0s for node "addons-20210817015042-1554185" to be "Ready" ...
	I0817 01:51:58.646058 1554672 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0817 01:51:58.647847 1554672 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0817 01:51:58.697505 1554672 out.go:177]   - Using image registry:2.7.1
	I0817 01:51:58.699108 1554672 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0817 01:51:58.699188 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0817 01:51:58.699196 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0817 01:51:58.699248 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.738566 1554672 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 01:51:58.738649 1554672 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 01:51:58.738662 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 01:51:58.738710 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.776023 1554672 addons.go:135] Setting addon default-storageclass=true in "addons-20210817015042-1554185"
	W0817 01:51:58.776048 1554672 addons.go:147] addon default-storageclass should already be in state true
	I0817 01:51:58.776074 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.776526 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.805011 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.875835 1554672 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0817 01:51:58.875902 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0817 01:51:58.876004 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.903641 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.916757 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0817 01:51:58.916831 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.922593 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.927465 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.028544 1554672 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 01:51:59.028564 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 01:51:59.028615 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:59.050931 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.052785 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.078548 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.103856 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.136455 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.164633 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0817 01:51:59.164654 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0817 01:51:59.294730 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0817 01:51:59.295256 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0817 01:51:59.334238 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0817 01:51:59.362229 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0817 01:51:59.362285 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0817 01:51:59.419723 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 01:51:59.419773 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0817 01:51:59.430396 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 01:51:59.439975 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0817 01:51:59.440022 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0817 01:51:59.457816 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0817 01:51:59.457862 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0817 01:51:59.478937 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0817 01:51:59.484531 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0817 01:51:59.484544 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0817 01:51:59.492438 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 01:51:59.516776 1554672 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0817 01:51:59.516819 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0817 01:51:59.533786 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0817 01:51:59.533830 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0817 01:51:59.538721 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 01:51:59.538765 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 01:51:59.551896 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0817 01:51:59.551933 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0817 01:51:59.577593 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0817 01:51:59.577637 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0817 01:51:59.586368 1554672 addons.go:135] Setting addon gcp-auth=true in "addons-20210817015042-1554185"
	I0817 01:51:59.586439 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:59.586992 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:59.602216 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0817 01:51:59.643183 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0817 01:51:59.643200 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0817 01:51:59.643758 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 01:51:59.643772 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 01:51:59.653214 1554672 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0817 01:51:59.654750 1554672 out.go:177]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0817 01:51:59.654796 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0817 01:51:59.654803 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0817 01:51:59.654918 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:59.662952 1554672 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.075944471s)
	I0817 01:51:59.662971 1554672 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
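	For reference, the fragment that the sed expression in the command above splices into the CoreDNS Corefile (immediately before its forward directive) is:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }

	This is what makes host.minikube.internal resolve to 192.168.49.1 (the host side of the cluster's Docker network) from inside pods.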
	I0817 01:51:59.675174 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0817 01:51:59.683436 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0817 01:51:59.683451 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0817 01:51:59.698631 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0817 01:51:59.698646 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0817 01:51:59.718898 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.744738 1554672 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:51:59.744759 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0817 01:51:59.753993 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0817 01:51:59.754011 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0817 01:51:59.768957 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 01:51:59.806478 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0817 01:51:59.806499 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0817 01:51:59.841542 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:51:59.896907 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0817 01:51:59.896928 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0817 01:52:00.017093 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0817 01:52:00.017114 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0817 01:52:00.098144 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0817 01:52:00.098165 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0817 01:52:00.148128 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0817 01:52:00.148150 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0817 01:52:00.212978 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 01:52:00.213000 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0817 01:52:00.240295 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0817 01:52:00.240316 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0817 01:52:00.328162 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 01:52:00.392480 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0817 01:52:00.392504 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0817 01:52:00.475797 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 01:52:00.475819 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0817 01:52:00.588665 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 01:52:00.613870 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:01.300912 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.870464987s)
	I0817 01:52:01.300955 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (1.966695194s)
	I0817 01:52:01.300964 1554672 addons.go:313] Verifying addon ingress=true in "addons-20210817015042-1554185"
	I0817 01:52:01.302793 1554672 out.go:177] * Verifying ingress addon...
	I0817 01:52:01.301217 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.808764624s)
	I0817 01:52:01.304580 1554672 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0817 01:52:01.324823 1554672 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0817 01:52:01.324869 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:01.866150 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:02.455106 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:02.756616 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:02.904020 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:03.374970 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:03.900784 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:04.389210 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:04.828604 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:05.113059 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:05.328501 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:05.828619 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.329237 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.849401 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.17419898s)
	I0817 01:52:06.849430 1554672 addons.go:313] Verifying addon registry=true in "addons-20210817015042-1554185"
	I0817 01:52:06.851610 1554672 out.go:177] * Verifying registry addon...
	I0817 01:52:06.849712 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.080729919s)
	I0817 01:52:06.849894 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (7.247660472s)
	I0817 01:52:06.850006 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.008434003s)
	I0817 01:52:06.850075 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (6.521890577s)
	I0817 01:52:06.853580 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0817 01:52:06.853656 1554672 addons.go:313] Verifying addon metrics-server=true in "addons-20210817015042-1554185"
	W0817 01:52:06.853699 1554672 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0817 01:52:06.853907 1554672 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	W0817 01:52:06.853735 1554672 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0817 01:52:06.853958 1554672 retry.go:31] will retry after 291.140013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
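The repeated "no matches for kind" errors above are transient: crds.yaml/olm.yaml and the snapshot manifests define CRDs and custom resources (OperatorGroup, ClusterServiceVersion, CatalogSource, VolumeSnapshotClass) in the same apply, and the API server has not yet established the new CRDs by the time kubectl submits the custom resources, so their kinds cannot be resolved. The retry.go:31 lines show minikube handling this by simply re-running the apply after a short backoff. Below is a minimal sketch of that apply-and-retry pattern; it is not minikube's actual retry.go or ssh_runner code, and runCmd/applyWithRetry are names invented for this illustration.

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// runCmd is a hypothetical stand-in for the ssh_runner invocations in the log:
	// it runs a single kubectl command locally and returns any error with its output.
	func runCmd(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}
	
	// applyWithRetry re-runs "kubectl apply" with a short backoff, so custom
	// resources whose CRDs are created by the same manifests eventually resolve
	// once the API server has established those CRDs.
	func applyWithRetry(attempts int, backoff time.Duration, files ...string) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		var err error
		for i := 0; i < attempts; i++ {
			if err = runCmd(args...); err == nil {
				return nil
			}
			time.Sleep(backoff)
		}
		return err
	}
	
	func main() {
		// Example mirroring the snapshot-controller apply that is retried in the log above.
		err := applyWithRetry(5, 300*time.Millisecond,
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
		if err != nil {
			fmt.Println("apply still failing after retries:", err)
		}
	}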
	I0817 01:52:06.853773 1554672 addons.go:313] Verifying addon gcp-auth=true in "addons-20210817015042-1554185"
	I0817 01:52:06.856170 1554672 out.go:177] * Verifying gcp-auth addon...
	I0817 01:52:06.858037 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0817 01:52:06.879437 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.901505 1554672 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 01:52:06.901521 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:06.902116 1554672 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0817 01:52:06.902127 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.114493 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:07.145764 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:52:07.214608 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0817 01:52:07.318415 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.729707072s)
	I0817 01:52:07.318482 1554672 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210817015042-1554185"
	I0817 01:52:07.320343 1554672 out.go:177] * Verifying csi-hostpath-driver addon...
	I0817 01:52:07.322240 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0817 01:52:07.329026 1554672 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0817 01:52:07.329072 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:07.329707 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:07.406785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:07.407051 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.829299 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:07.833611 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:07.905862 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.906518 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.243240 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.097293811s)
	I0817 01:52:08.329812 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:08.338224 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:08.405779 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:08.407978 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.530852 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (1.316180136s)
	I0817 01:52:08.829006 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:08.834034 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:08.905993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.906433 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.328255 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:09.333785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:09.405657 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:09.405914 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.613886 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:09.829205 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:09.832931 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:09.905035 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.905962 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.328643 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:10.333042 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:10.404941 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:10.405901 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.829248 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:10.833275 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:10.905773 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.906291 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.328954 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:11.333012 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:11.409301 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.410066 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:11.614143 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:11.828872 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:11.833797 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:11.904929 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.905665 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.328367 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:12.333086 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:12.405384 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.405823 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:12.829376 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:12.833255 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:12.905024 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.905295 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.330689 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:13.338216 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:13.404972 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.406085 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:13.829177 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:13.832929 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:13.904662 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.905242 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.113342 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:14.328450 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:14.333455 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:14.404940 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.405321 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:14.827779 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:14.832993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:14.905252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.905259 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:15.328264 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:15.332934 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:15.404658 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:15.405224 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:15.828486 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:15.833605 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:15.904727 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:15.905383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:16.328197 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:16.332914 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:16.405192 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:16.405977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:16.613508 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:16.828234 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:16.833383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:16.904446 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:16.905357 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.327749 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:17.337646 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:17.404755 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:17.405248 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.827645 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:17.832968 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:17.904120 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.905322 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:18.328032 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:18.332346 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:18.405262 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:18.405850 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:18.828047 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:18.833667 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:18.906070 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:18.906612 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:19.112711 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:19.327808 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:19.332949 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:19.404756 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:19.404964 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:19.828001 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:19.833437 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:19.904449 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:19.904977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.327656 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:20.333295 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:20.404715 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.405667 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:20.828390 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:20.833214 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:20.905458 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.905671 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:21.328312 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:21.333138 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:21.405037 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:21.406170 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:21.612764 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:21.944477 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:21.946682 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:21.947605 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:21.947754 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:22.328433 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:22.333541 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:22.404285 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:22.405669 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:22.827511 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:22.833159 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:22.905254 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:22.905581 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:23.328750 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:23.333436 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:23.404313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:23.405077 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:23.613578 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:23.828253 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:23.832694 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:23.904993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:23.905761 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:24.328880 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:24.333313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:24.404520 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:24.404733 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:24.828322 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:24.833601 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:24.905217 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:24.905274 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.330911 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:25.337306 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:25.404857 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.405921 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:25.832639 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:25.835193 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:25.905020 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.905738 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:26.112693 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:26.327937 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:26.333091 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:26.405361 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:26.405698 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:26.828674 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:26.833006 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:26.905177 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:26.906093 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:27.337144 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:27.338231 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:27.406265 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:27.406265 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:27.828866 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:27.833010 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:27.904570 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:27.905457 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.112963 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:28.328408 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:28.333808 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:28.404888 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:28.405625 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.828928 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:28.833221 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:28.905969 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.906240 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:29.330142 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:29.334291 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:29.404551 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:29.405831 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:29.837438 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:29.838402 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:29.905810 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:29.905987 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.113285 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:30.328348 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:30.332925 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:30.405080 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:30.405351 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.828025 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:30.832792 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:30.905180 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.905627 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.328284 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:31.333115 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:31.405329 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:31.406706 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.828629 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:31.833620 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:31.908890 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.911347 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.113408 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:32.328824 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:32.333028 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:32.404805 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.405808 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:32.829223 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:32.833077 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:32.905936 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.906733 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:33.113600 1554672 node_ready.go:49] node "addons-20210817015042-1554185" has status "Ready":"True"
	I0817 01:52:33.113625 1554672 node_ready.go:38] duration metric: took 34.525011363s waiting for node "addons-20210817015042-1554185" to be "Ready" ...
	I0817 01:52:33.113634 1554672 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
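From here the log interleaves the per-addon polls (the kapi.go:96 "waiting for pod ... current state: Pending" lines) with the system-critical pod checks from pod_ready.go. Each poll amounts to listing pods by label selector, recording their phase, and repeating until they are Ready or the timeout expires. A rough, hedged sketch of such a loop follows; it shells out to kubectl with a jsonpath query rather than using minikube's client-go based kapi code, and waitForLabel is a name made up for this example.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitForLabel polls pods matching a label selector until none of them
	// report a "Pending" phase or the timeout expires (invented name; illustrative only).
	func waitForLabel(ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "get", "pods",
				"-n", ns, "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			phases := strings.TrimSpace(string(out))
			if err == nil && phases != "" && !strings.Contains(phases, "Pending") {
				return nil // every matching pod has left the Pending phase
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods with label %q still pending after %s", selector, timeout)
	}
	
	func main() {
		// Example mirroring one of the selectors polled in the log above.
		if err := waitForLabel("kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}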
	I0817 01:52:33.122258 1554672 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:33.328105 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:33.333112 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:33.405131 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:33.406483 1554672 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 01:52:33.406499 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:33.828753 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:33.833308 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:33.905785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:33.906578 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:34.328900 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:34.333293 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:34.405074 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:34.405422 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:34.829036 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:34.844069 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:34.907082 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:34.907261 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.133323 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:35.329658 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:35.340946 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:35.406005 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.406344 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:35.828964 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:35.836081 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:35.905164 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.905926 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:36.328635 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:36.333208 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:36.406327 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:36.406693 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:36.828912 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:36.834669 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:36.906233 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:36.907548 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:37.328276 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:37.333065 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:37.443517 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:37.443853 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:37.633378 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:37.829201 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:37.833434 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:37.906518 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:37.906857 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.329240 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:38.333317 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:38.408662 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.409011 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:38.828315 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:38.837240 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:38.904802 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.906525 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.329255 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:39.346113 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:39.413436 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.413760 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:39.634418 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:39.828371 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:39.833885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:39.905904 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.906262 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:40.328884 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:40.333598 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:40.405309 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:40.407193 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:40.828938 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:40.833697 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:40.905855 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:40.906245 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:41.329054 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:41.334180 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:41.404918 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:41.406327 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:41.828350 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:41.833549 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:41.905158 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:41.905842 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:42.193681 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:42.328599 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:42.335515 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:42.405022 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:42.405819 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:42.828942 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:42.833740 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:42.905762 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:42.905954 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:43.328334 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:43.333885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:43.415938 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:43.416337 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:43.829129 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:43.839083 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:43.905165 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:43.905905 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.328646 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:44.333389 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:44.404851 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.406163 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:44.634366 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:52:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:44.828620 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:44.833682 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:44.905712 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.909482 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:45.328143 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:45.332944 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:45.406611 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:45.407338 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:45.828634 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:45.832978 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:45.904648 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:45.905363 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:46.328711 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:46.333910 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:46.405839 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:46.406763 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:46.635221 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:52:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:46.828340 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:46.833989 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:46.905252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:46.906215 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:47.328332 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:47.333832 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:47.405675 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:47.407973 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:47.827969 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:47.833906 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:47.907574 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:47.912357 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.328701 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:48.333127 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:48.406135 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:48.406524 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.636787 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:48.828308 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:48.833330 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:48.906467 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.906683 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:49.328422 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:49.333598 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:49.405055 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:49.405237 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:49.828698 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:49.833563 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:49.905439 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:49.905671 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:50.329000 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:50.335089 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:50.406085 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:50.407525 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:50.829299 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:50.833324 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:50.906506 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:50.906956 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.134950 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:51.327885 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:51.333409 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:51.405287 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:51.406140 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.829287 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:51.834079 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:51.905595 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.906917 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.136668 1554672 pod_ready.go:92] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.136691 1554672 pod_ready.go:81] duration metric: took 19.014386562s waiting for pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.136717 1554672 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.140525 1554672 pod_ready.go:92] pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.140545 1554672 pod_ready.go:81] duration metric: took 3.820392ms waiting for pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.140557 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.144374 1554672 pod_ready.go:92] pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.144391 1554672 pod_ready.go:81] duration metric: took 3.805ms waiting for pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.144400 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.147997 1554672 pod_ready.go:92] pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.148018 1554672 pod_ready.go:81] duration metric: took 3.596018ms waiting for pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.148027 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-88pjl" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.151630 1554672 pod_ready.go:92] pod "kube-proxy-88pjl" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.151645 1554672 pod_ready.go:81] duration metric: took 3.612895ms waiting for pod "kube-proxy-88pjl" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.151654 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.328964 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:52.333708 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:52.405187 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:52.406370 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.533532 1554672 pod_ready.go:92] pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.533558 1554672 pod_ready.go:81] duration metric: took 381.895022ms waiting for pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.533568 1554672 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.829155 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:52.839844 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:52.905885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.906272 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.346344 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:53.352796 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:53.409937 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:53.410482 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.834056 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:53.834773 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:53.907214 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.907172 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.331399 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:54.336335 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:54.407048 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.410847 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:54.829058 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:54.833883 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:54.905684 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:54.906829 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.944019 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:55.328849 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:55.333455 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:55.406435 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:55.408050 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:55.834184 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:55.836250 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:55.907784 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:55.908229 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.340402 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:56.341855 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:56.405913 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:56.406308 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.829718 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:56.840586 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:56.908288 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:56.908568 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.948818 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:57.328503 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:57.334462 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:57.406776 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:57.407190 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:57.828588 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:57.833847 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:57.905081 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:57.906429 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:58.329593 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:58.335086 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:58.405528 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:58.406555 1554672 kapi.go:108] duration metric: took 51.552974836s to wait for kubernetes.io/minikube-addons=registry ...
	I0817 01:52:58.829266 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:58.833517 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:58.905974 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:59.342609 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:59.348252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:59.405313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:59.444841 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:59.828685 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:59.833928 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:59.905309 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:00.328962 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:00.333845 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:00.405313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:00.829039 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:00.834166 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:00.904823 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.328747 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:01.334336 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:01.404643 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.829758 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:01.835420 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:01.905318 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.945948 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:53:02.376424 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:02.377873 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:02.404990 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:02.828812 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:02.833383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:02.904641 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:03.329032 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:03.337245 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:03.406764 1554672 kapi.go:108] duration metric: took 56.548723137s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0817 01:53:03.408669 1554672 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20210817015042-1554185 cluster.
	I0817 01:53:03.410521 1554672 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0817 01:53:03.412326 1554672 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0817 01:53:03.448173 1554672 pod_ready.go:92] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"True"
	I0817 01:53:03.448196 1554672 pod_ready.go:81] duration metric: took 10.914620384s waiting for pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace to be "Ready" ...
	I0817 01:53:03.448215 1554672 pod_ready.go:38] duration metric: took 30.334547327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 01:53:03.448235 1554672 api_server.go:50] waiting for apiserver process to appear ...
	I0817 01:53:03.448250 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:03.448304 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:03.564171 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:03.564232 1554672 cri.go:76] found id: ""
	I0817 01:53:03.564250 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:03.564343 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.575403 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:03.575484 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:03.604432 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:03.604494 1554672 cri.go:76] found id: ""
	I0817 01:53:03.604513 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:03.604561 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.607149 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:03.607215 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:03.632895 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:03.632908 1554672 cri.go:76] found id: ""
	I0817 01:53:03.632913 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:03.632967 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.635372 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:03.635435 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:03.664635 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:03.664650 1554672 cri.go:76] found id: ""
	I0817 01:53:03.664655 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:03.664689 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.667197 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:03.667270 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:03.691527 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:03.691545 1554672 cri.go:76] found id: ""
	I0817 01:53:03.691550 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:03.691582 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.693995 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:03.694060 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:03.717435 1554672 cri.go:76] found id: ""
	I0817 01:53:03.717475 1554672 logs.go:270] 0 containers: []
	W0817 01:53:03.717489 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:03.717495 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:03.717533 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:03.741717 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:03.741734 1554672 cri.go:76] found id: ""
	I0817 01:53:03.741739 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:03.741798 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.744804 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:03.744851 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:03.771775 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:03.771789 1554672 cri.go:76] found id: ""
	I0817 01:53:03.771794 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:03.771831 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.774470 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:03.774489 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:03.801776 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:03.801798 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:03.837058 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:03.840579 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:03.843933 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:03.843957 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:03.898510 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:03.898538 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:03.952593 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:03.952621 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:04.082990 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:04.083052 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:04.223853 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:04.223887 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:04.331965 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:04.338761 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:04.340534 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:04.342392 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:04.357212 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:04.357263 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:04.694598 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:04.694717 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:04.828761 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:04.828816 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:04.851348 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:04.852551 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:04.876644 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:04.876688 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:05.331362 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:05.343522 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:05.831960 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:05.841720 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:06.329544 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:06.334286 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:06.829369 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:06.833923 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.328774 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:07.334115 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.467368 1554672 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 01:53:07.487646 1554672 api_server.go:70] duration metric: took 1m9.278576044s to wait for apiserver process to appear ...
	I0817 01:53:07.487700 1554672 api_server.go:86] waiting for apiserver healthz status ...
	I0817 01:53:07.487733 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:07.487806 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:07.534592 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:07.534644 1554672 cri.go:76] found id: ""
	I0817 01:53:07.534661 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:07.534726 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.538672 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:07.538745 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:07.572611 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:07.572657 1554672 cri.go:76] found id: ""
	I0817 01:53:07.572674 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:07.572739 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.576722 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:07.576801 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:07.611541 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:07.611559 1554672 cri.go:76] found id: ""
	I0817 01:53:07.611564 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:07.611627 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.614311 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:07.614389 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:07.641823 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:07.641859 1554672 cri.go:76] found id: ""
	I0817 01:53:07.641864 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:07.641897 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.644712 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:07.644770 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:07.667773 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:07.667788 1554672 cri.go:76] found id: ""
	I0817 01:53:07.667793 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:07.667831 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.670409 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:07.670478 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:07.695746 1554672 cri.go:76] found id: ""
	I0817 01:53:07.695763 1554672 logs.go:270] 0 containers: []
	W0817 01:53:07.695768 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:07.695784 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:07.695828 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:07.727549 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:07.727592 1554672 cri.go:76] found id: ""
	I0817 01:53:07.727608 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:07.727672 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.731096 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:07.731168 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:07.758719 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:07.758734 1554672 cri.go:76] found id: ""
	I0817 01:53:07.758739 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:07.758787 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.761946 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:07.761964 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:07.830586 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:07.834021 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.863604 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:07.863626 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:07.887301 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:07.887356 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:07.918171 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:07.918195 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:08.012682 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:08.012712 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:08.059071 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:08.059126 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:08.163276 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:08.163302 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:08.176772 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:08.176790 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:08.330227 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:08.344515 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:08.425430 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:08.425453 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:08.486450 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:08.486475 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:08.515454 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:08.515475 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:08.542038 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:08.542057 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:08.828741 1554672 kapi.go:108] duration metric: took 1m7.524156223s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0817 01:53:08.834977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:09.335143 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:09.835186 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:10.335936 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:10.834892 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.068088 1554672 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 01:53:11.076771 1554672 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 01:53:11.077605 1554672 api_server.go:139] control plane version: v1.21.3
	I0817 01:53:11.077645 1554672 api_server.go:129] duration metric: took 3.589928004s to wait for apiserver health ...
	I0817 01:53:11.077667 1554672 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 01:53:11.077694 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:11.077770 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:11.134012 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:11.134030 1554672 cri.go:76] found id: ""
	I0817 01:53:11.134035 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:11.134081 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.136813 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:11.136882 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:11.158746 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:11.158763 1554672 cri.go:76] found id: ""
	I0817 01:53:11.158768 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:11.158868 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.161890 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:11.161955 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:11.185618 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:11.185638 1554672 cri.go:76] found id: ""
	I0817 01:53:11.185643 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:11.185698 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.188273 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:11.188341 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:11.212061 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:11.212084 1554672 cri.go:76] found id: ""
	I0817 01:53:11.212104 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:11.212154 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.214710 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:11.214777 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:11.254063 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:11.254080 1554672 cri.go:76] found id: ""
	I0817 01:53:11.254086 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:11.254150 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.257322 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:11.257386 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:11.280677 1554672 cri.go:76] found id: ""
	I0817 01:53:11.280719 1554672 logs.go:270] 0 containers: []
	W0817 01:53:11.280735 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:11.280749 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:11.280792 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:11.302301 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:11.302344 1554672 cri.go:76] found id: ""
	I0817 01:53:11.302359 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:11.302405 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.305069 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:11.305128 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:11.334791 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.337025 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:11.337041 1554672 cri.go:76] found id: ""
	I0817 01:53:11.337046 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:11.337097 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.340390 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:11.340407 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:11.377298 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:11.377344 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:11.408451 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:11.408473 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:11.514559 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:11.514589 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:11.567396 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:11.567423 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:11.625821 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:11.625847 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:11.652282 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:11.652306 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:11.675002 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:11.675047 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:11.697704 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:11.697724 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:11.745590 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:11.745611 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:11.836311 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.837956 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:11.837993 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:11.865409 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:11.865430 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:12.335417 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:12.834938 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:13.335078 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:13.835219 1554672 kapi.go:108] duration metric: took 1m6.512977174s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0817 01:53:13.838858 1554672 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, volumesnapshots, olm, registry, gcp-auth, ingress, csi-hostpath-driver
	I0817 01:53:13.838918 1554672 addons.go:344] enableAddons completed in 1m15.630038865s
	I0817 01:53:14.513128 1554672 system_pods.go:59] 18 kube-system pods found
	I0817 01:53:14.513163 1554672 system_pods.go:61] "coredns-558bd4d5db-sxct6" [954e41e3-b5a4-4fa8-8926-fa9f53507414] Running
	I0817 01:53:14.513169 1554672 system_pods.go:61] "csi-hostpath-attacher-0" [ae96aa4a-3965-45a9-8e94-db9fee5bcae1] Running
	I0817 01:53:14.513174 1554672 system_pods.go:61] "csi-hostpath-provisioner-0" [e6cedc35-32e7-4be9-907a-063eaa25f07d] Running
	I0817 01:53:14.513178 1554672 system_pods.go:61] "csi-hostpath-resizer-0" [9c8b4283-1ee1-45a6-ace0-e0096867592a] Running
	I0817 01:53:14.513183 1554672 system_pods.go:61] "csi-hostpath-snapshotter-0" [d23b3d96-e125-4669-a694-40c25a9ca2bc] Running
	I0817 01:53:14.513189 1554672 system_pods.go:61] "csi-hostpathplugin-0" [50373dc3-be79-4049-b2f4-e19bb0a79c10] Running
	I0817 01:53:14.513193 1554672 system_pods.go:61] "etcd-addons-20210817015042-1554185" [b0a759e2-33c6-486b-aee8-e1019669fb12] Running
	I0817 01:53:14.513200 1554672 system_pods.go:61] "kindnet-xp2kn" [234e19f8-3cdd-4c44-9dff-290f932bba79] Running
	I0817 01:53:14.513205 1554672 system_pods.go:61] "kube-apiserver-addons-20210817015042-1554185" [3704ee0c-53da-4106-b407-9c6829a74921] Running
	I0817 01:53:14.513215 1554672 system_pods.go:61] "kube-controller-manager-addons-20210817015042-1554185" [a7e9992d-6083-4141-a778-7ab31067cb40] Running
	I0817 01:53:14.513220 1554672 system_pods.go:61] "kube-proxy-88pjl" [3152779f-8eaa-4982-8a07-a39f7c215086] Running
	I0817 01:53:14.513225 1554672 system_pods.go:61] "kube-scheduler-addons-20210817015042-1554185" [3e4bab6f-a0a1-46e8-83dd-f7b11f4e9d62] Running
	I0817 01:53:14.513229 1554672 system_pods.go:61] "metrics-server-77c99ccb96-x8mh4" [634199c2-0567-4fb7-b5d6-2205aa4fcfd4] Running
	I0817 01:53:14.513238 1554672 system_pods.go:61] "registry-9np4b" [1b228b1c-c9df-4c36-a0f4-2a2fb6ec967b] Running
	I0817 01:53:14.513247 1554672 system_pods.go:61] "registry-proxy-p5xh8" [0a7638e5-9e17-4626-aeb9-b7fe2abe695d] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 01:53:14.513257 1554672 system_pods.go:61] "snapshot-controller-989f9ddc8-rcswn" [0d6290bd-2e2d-4e14-b299-f4b14ea2de3b] Running
	I0817 01:53:14.513264 1554672 system_pods.go:61] "snapshot-controller-989f9ddc8-zqgfr" [7ceff31f-ed10-44dd-8c0f-7063e87beadc] Running
	I0817 01:53:14.513274 1554672 system_pods.go:61] "storage-provisioner" [3f4cb2a6-c88b-486f-bba7-cef64ca39e9a] Running
	I0817 01:53:14.513279 1554672 system_pods.go:74] duration metric: took 3.43559739s to wait for pod list to return data ...
	I0817 01:53:14.513290 1554672 default_sa.go:34] waiting for default service account to be created ...
	I0817 01:53:14.515707 1554672 default_sa.go:45] found service account: "default"
	I0817 01:53:14.515727 1554672 default_sa.go:55] duration metric: took 2.432583ms for default service account to be created ...
	I0817 01:53:14.515734 1554672 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 01:53:14.523274 1554672 system_pods.go:86] 18 kube-system pods found
	I0817 01:53:14.523301 1554672 system_pods.go:89] "coredns-558bd4d5db-sxct6" [954e41e3-b5a4-4fa8-8926-fa9f53507414] Running
	I0817 01:53:14.523309 1554672 system_pods.go:89] "csi-hostpath-attacher-0" [ae96aa4a-3965-45a9-8e94-db9fee5bcae1] Running
	I0817 01:53:14.523314 1554672 system_pods.go:89] "csi-hostpath-provisioner-0" [e6cedc35-32e7-4be9-907a-063eaa25f07d] Running
	I0817 01:53:14.523324 1554672 system_pods.go:89] "csi-hostpath-resizer-0" [9c8b4283-1ee1-45a6-ace0-e0096867592a] Running
	I0817 01:53:14.523332 1554672 system_pods.go:89] "csi-hostpath-snapshotter-0" [d23b3d96-e125-4669-a694-40c25a9ca2bc] Running
	I0817 01:53:14.523338 1554672 system_pods.go:89] "csi-hostpathplugin-0" [50373dc3-be79-4049-b2f4-e19bb0a79c10] Running
	I0817 01:53:14.523346 1554672 system_pods.go:89] "etcd-addons-20210817015042-1554185" [b0a759e2-33c6-486b-aee8-e1019669fb12] Running
	I0817 01:53:14.523351 1554672 system_pods.go:89] "kindnet-xp2kn" [234e19f8-3cdd-4c44-9dff-290f932bba79] Running
	I0817 01:53:14.523364 1554672 system_pods.go:89] "kube-apiserver-addons-20210817015042-1554185" [3704ee0c-53da-4106-b407-9c6829a74921] Running
	I0817 01:53:14.523369 1554672 system_pods.go:89] "kube-controller-manager-addons-20210817015042-1554185" [a7e9992d-6083-4141-a778-7ab31067cb40] Running
	I0817 01:53:14.523377 1554672 system_pods.go:89] "kube-proxy-88pjl" [3152779f-8eaa-4982-8a07-a39f7c215086] Running
	I0817 01:53:14.523382 1554672 system_pods.go:89] "kube-scheduler-addons-20210817015042-1554185" [3e4bab6f-a0a1-46e8-83dd-f7b11f4e9d62] Running
	I0817 01:53:14.523391 1554672 system_pods.go:89] "metrics-server-77c99ccb96-x8mh4" [634199c2-0567-4fb7-b5d6-2205aa4fcfd4] Running
	I0817 01:53:14.523396 1554672 system_pods.go:89] "registry-9np4b" [1b228b1c-c9df-4c36-a0f4-2a2fb6ec967b] Running
	I0817 01:53:14.523405 1554672 system_pods.go:89] "registry-proxy-p5xh8" [0a7638e5-9e17-4626-aeb9-b7fe2abe695d] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 01:53:14.523414 1554672 system_pods.go:89] "snapshot-controller-989f9ddc8-rcswn" [0d6290bd-2e2d-4e14-b299-f4b14ea2de3b] Running
	I0817 01:53:14.523429 1554672 system_pods.go:89] "snapshot-controller-989f9ddc8-zqgfr" [7ceff31f-ed10-44dd-8c0f-7063e87beadc] Running
	I0817 01:53:14.523434 1554672 system_pods.go:89] "storage-provisioner" [3f4cb2a6-c88b-486f-bba7-cef64ca39e9a] Running
	I0817 01:53:14.523439 1554672 system_pods.go:126] duration metric: took 7.700756ms to wait for k8s-apps to be running ...
	I0817 01:53:14.523449 1554672 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 01:53:14.523496 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 01:53:14.532286 1554672 system_svc.go:56] duration metric: took 8.834069ms WaitForService to wait for kubelet.
	I0817 01:53:14.532341 1554672 kubeadm.go:547] duration metric: took 1m16.323273553s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 01:53:14.532368 1554672 node_conditions.go:102] verifying NodePressure condition ...
	I0817 01:53:14.535572 1554672 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 01:53:14.535600 1554672 node_conditions.go:123] node cpu capacity is 2
	I0817 01:53:14.535613 1554672 node_conditions.go:105] duration metric: took 3.24014ms to run NodePressure ...
	I0817 01:53:14.535627 1554672 start.go:231] waiting for startup goroutines ...
	I0817 01:53:14.849964 1554672 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 01:53:14.851936 1554672 out.go:177] * Done! kubectl is now configured to use "addons-20210817015042-1554185" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID
	5eebb444a6ef1       04bd8b4e0d303       4 minutes ago       Running             task-pv-container                        0                   4ff6bc2905fc1
	6f369eecd6011       d544402579747       4 minutes ago       Exited              catalog-operator                         7                   d7bf81a0ac291
	2de95bad668aa       d544402579747       4 minutes ago       Exited              olm-operator                             7                   3745d022afa82
	fcf13ae398afa       1611cd07b61d5       10 minutes ago      Running             busybox                                  0                   f1d131a615a5f
	79b64e9292026       ab63026e5f864       14 minutes ago      Running             liveness-probe                           0                   e1b67cc269ffc
	86d072c7d6f7f       f8f69c8b53974       14 minutes ago      Running             hostpath                                 0                   e1b67cc269ffc
	95f12ea0ee9f4       1f46a863d2aa9       14 minutes ago      Running             node-driver-registrar                    0                   e1b67cc269ffc
	9803a3ca0028f       bac9ddccb0c70       14 minutes ago      Running             controller                               0                   8a5c5f5789f9b
	d76b5b43f143e       b4df90000e547       14 minutes ago      Running             csi-external-health-monitor-controller   0                   e1b67cc269ffc
	33a3dc8565cfc       69724f415cab8       15 minutes ago      Running             csi-attacher                             0                   0ae516c846f1f
	7cf8bb6cdcfe2       a883f7fc35610       15 minutes ago      Exited              patch                                    0                   8703f481d86b7
	b2165d1abb5e5       a883f7fc35610       15 minutes ago      Exited              create                                   0                   95b081ab37530
	c3eb735c4bd3e       d65cad97e5f05       15 minutes ago      Running             csi-snapshotter                          0                   558825437a764
	3af33a1255a45       03c15ec36e257       15 minutes ago      Running             csi-provisioner                          0                   cb89551723f57
	2b02be61418e6       63f120615f44b       15 minutes ago      Running             csi-external-health-monitor-agent        0                   e1b67cc269ffc
	1b06e793319cd       3758cfc26c6db       15 minutes ago      Running             volume-snapshot-controller               0                   c81fe44186720
	ecf9efd7a3f01       803606888e0b1       15 minutes ago      Running             csi-resizer                              0                   f99c5fda234ab
	783c0958684bd       ba04bb24b9575       15 minutes ago      Running             storage-provisioner                      0                   0cde084873a62
	13be13e3410ac       1a1f05a2cd7c2       15 minutes ago      Running             coredns                                  0                   00cb17ddd7f4a
	6fe738b9a8dba       3758cfc26c6db       15 minutes ago      Running             volume-snapshot-controller               0                   d0b05273cbb65
	7b33a9bf5802e       f37b7c809e5dc       16 minutes ago      Running             kindnet-cni                              0                   96dbe7c3048af
	0483eb703ed0f       4ea38350a1beb       16 minutes ago      Running             kube-proxy                               0                   f0918af3dc71f
	eacccd844ca10       44a6d50ef170d       16 minutes ago      Running             kube-apiserver                           0                   a18344960e958
	615d16acf0dc7       31a3b96cefc1e       16 minutes ago      Running             kube-scheduler                           0                   99c49ff38f4e8
	29af4eb3039bc       05b738aa1bc63       16 minutes ago      Running             etcd                                     0                   c6d8e2c4d15ca
	52a4c60d098e5       cb310ff289d79       16 minutes ago      Running             kube-controller-manager                  0                   437a86afaf37b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 01:50:50 UTC, end at Tue 2021-08-17 02:08:00 UTC. --
	Aug 17 02:03:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:57.879614167Z" level=info msg="StopPodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" returns successfully"
	Aug 17 02:03:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:57.890930033Z" level=info msg="RemoveContainer for \"eb2360810df1ac87246f467960ae5f4f48f88e2d9520e5916cc5533a65753351\""
	Aug 17 02:03:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:57.902591627Z" level=info msg="RemoveContainer for \"eb2360810df1ac87246f467960ae5f4f48f88e2d9520e5916cc5533a65753351\" returns successfully"
	Aug 17 02:03:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:57.911020113Z" level=error msg="ContainerStatus for \"eb2360810df1ac87246f467960ae5f4f48f88e2d9520e5916cc5533a65753351\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb2360810df1ac87246f467960ae5f4f48f88e2d9520e5916cc5533a65753351\": not found"
	Aug 17 02:03:58 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:58.878969692Z" level=info msg="StopPodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\""
	Aug 17 02:03:58 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:58.911902385Z" level=info msg="TearDown network for sandbox \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" successfully"
	Aug 17 02:03:58 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:58.911930372Z" level=info msg="StopPodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" returns successfully"
	Aug 17 02:03:59 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:59.073632216Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:nginx,Uid:ed08e79b-1781-4708-8f29-d5b69cc3c7c6,Namespace:default,Attempt:0,}"
	Aug 17 02:03:59 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:59.150236875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be360c108dbda8ca7c6b59ecd7cb6cffb476ba94256bf81ebbedc94430717c00 pid=14165
	Aug 17 02:03:59 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:59.217627315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx,Uid:ed08e79b-1781-4708-8f29-d5b69cc3c7c6,Namespace:default,Attempt:0,} returns sandbox id \"be360c108dbda8ca7c6b59ecd7cb6cffb476ba94256bf81ebbedc94430717c00\""
	Aug 17 02:03:59 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:59.218965862Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:04:00 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:00.115379884Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:04:13 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:13.897083154Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:04:14 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:14.822268171Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:04:41 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:41.897006103Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:04:42 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:42.858879672Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:04:53 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:53.047714555Z" level=info msg="StopPodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\""
	Aug 17 02:04:53 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:53.058229986Z" level=info msg="TearDown network for sandbox \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" successfully"
	Aug 17 02:04:53 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:53.058262970Z" level=info msg="StopPodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" returns successfully"
	Aug 17 02:04:53 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:53.058615656Z" level=info msg="RemovePodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\""
	Aug 17 02:04:53 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:53.064378708Z" level=info msg="RemovePodSandbox \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" returns successfully"
	Aug 17 02:05:24 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:05:24.897357489Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:05:25 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:05:25.800770283Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:06:45 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:06:45.897130825Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:06:46 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:06:46.976756904Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	
	* 
	* ==> coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210817015042-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210817015042-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=addons-20210817015042-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T01_51_45_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210817015042-1554185
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210817015042-1554185"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 01:51:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210817015042-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 02:07:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 02:03:57 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 02:03:57 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 02:03:57 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 02:03:57 +0000   Tue, 17 Aug 2021 01:52:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210817015042-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                ace180e0-70a7-4178-bffd-233be0529698
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  default                     task-pv-pod                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-59b45fb494-d8wsj                100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         15m
	  kube-system                 coredns-558bd4d5db-sxct6                                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 csi-hostpath-attacher-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpath-provisioner-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpath-resizer-0                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpath-snapshotter-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpathplugin-0                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-addons-20210817015042-1554185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-xp2kn                                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-20210817015042-1554185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-20210817015042-1554185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-88pjl                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-20210817015042-1554185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-989f9ddc8-rcswn                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 snapshot-controller-989f9ddc8-zqgfr                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  olm                         catalog-operator-75d496484d-86xl7                        10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         15m
	  olm                         olm-operator-859c88c96-j28dd                             10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                970m (48%)  100m (5%)
	  memory             550Mi (7%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 16m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x4 over 16m)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x3 over 16m)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x3 over 16m)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet     Node addons-20210817015042-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                15m                kubelet     Node addons-20210817015042-1554185 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug17 01:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] <==
	* 2021-08-17 02:04:12.861414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:04:22.861743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:04:32.861366 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:04:42.861685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:04:52.861156 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:05:02.861363 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:05:12.861600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:05:22.861982 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:05:32.861550 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:05:42.860977 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:05:52.861108 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:06:02.861656 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:06:12.861959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:06:22.861928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:06:32.861012 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:06:36.806216 I | mvcc: store.index: compact 2122
	2021-08-17 02:06:36.821964 I | mvcc: finished scheduled compaction at 2122 (took 15.228804ms)
	2021-08-17 02:06:42.861690 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:06:52.861904 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:07:02.861689 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:07:12.861881 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:07:22.861287 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:07:32.861924 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:07:42.861847 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:07:52.861655 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:08:00 up  9:50,  0 users,  load average: 0.43, 0.55, 1.02
	Linux addons-20210817015042-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] <==
	* I0817 02:03:04.787437       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:03:04.787446       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:03:39.645013       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:03:39.645160       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:03:39.645180       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:03:58.370446       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	I0817 02:04:06.032090       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0817 02:04:22.523416       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:04:22.523455       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:04:22.523463       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:04:56.908929       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:04:56.908972       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:04:56.909061       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:05:31.756406       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:05:31.756448       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:05:31.756457       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:06:11.763267       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:06:11.763305       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:06:11.763314       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:06:55.220595       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:06:55.220643       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:06:55.220652       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:07:37.179997       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:07:37.180041       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:07:37.180050       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] <==
	* I0817 01:52:27.209952       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
	I0817 01:52:27.209987       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
	I0817 01:52:27.210060       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
	I0817 01:52:27.211453       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0817 01:52:27.412457       1 shared_informer.go:247] Caches are synced for resource quota 
	W0817 01:52:27.565215       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 01:52:27.570191       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0817 01:52:27.585755       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0817 01:52:27.587067       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0817 01:52:27.788117       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 01:52:33.056834       1 event.go:291] "Event occurred" object="kube-system/registry-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-proxy-p5xh8"
	E0817 01:52:33.075112       1 daemon_controller.go:320] kube-system/registry-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"registry-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"bbc76700-77ff-4df0-928a-e381ef3cf185", ResourceVersion:"486", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764761920, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "kubernetes.io/minikube-addons":"registry"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"kubernetes.io/minikube-addons\":\"registry\"},\"name\":\"regist
ry-proxy\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"kubernetes.io/minikube-addons\":\"registry\",\"registry-proxy\":\"true\"}},\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"kubernetes.io/minikube-addons\":\"registry\",\"registry-proxy\":\"true\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"REGISTRY_HOST\",\"value\":\"registry.kube-system.svc.cluster.local\"},{\"name\":\"REGISTRY_PORT\",\"value\":\"80\"}],\"image\":\"gcr.io/google_containers/kube-registry-proxy:0.4@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"registry-proxy\",\"ports\":[{\"containerPort\":80,\"hostPort\":5000,\"name\":\"registry\"}]}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d7e
000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d7e018)}, v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d7e030), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d7e048)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b3d3e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "kubernetes.io/minikube-addons":"registry", "registry-proxy":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(n
il), Containers:[]v1.Container{v1.Container{Name:"registry-proxy", Image:"gcr.io/google_containers/kube-registry-proxy:0.4@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"registry", HostPort:5000, ContainerPort:80, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"REGISTRY_HOST", Value:"registry.kube-system.svc.cluster.local", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"REGISTRY_PORT", Value:"80", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPre
sent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001d2d158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004f7d50), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:
v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001d63790)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001d2d16c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "registry-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0817 01:52:36.883695       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0817 01:52:56.050693       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0817 01:52:56.851746       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0817 01:52:57.251384       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	E0817 01:52:57.435870       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0817 01:52:57.652302       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	W0817 01:52:57.808910       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 01:57:26.704983       1 tokens_controller.go:262] error synchronizing serviceaccount gcp-auth/default: secrets "default-token-zg7wn" is forbidden: unable to create new content in namespace gcp-auth because it is being terminated
	I0817 01:57:48.400672       1 event.go:291] "Event occurred" object="default/hpvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0817 01:57:48.797797       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-edf3d92e-1108-4adc-a8cd-37519395465d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^85c325d7-fefe-11eb-bd30-26acc1e90309") from node "addons-20210817015042-1554185" 
	I0817 01:57:49.345399       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-edf3d92e-1108-4adc-a8cd-37519395465d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^85c325d7-fefe-11eb-bd30-26acc1e90309") from node "addons-20210817015042-1554185" 
	I0817 01:57:49.345604       1 event.go:291] "Event occurred" object="default/task-pv-pod" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-edf3d92e-1108-4adc-a8cd-37519395465d\" "
	I0817 01:57:53.078416       1 namespace_controller.go:185] Namespace has been deleted gcp-auth
	
	* 
	* ==> kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] <==
	* I0817 01:51:59.199305       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 01:51:59.199348       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 01:51:59.199381       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 01:51:59.228513       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 01:51:59.228548       1 server_others.go:212] Using iptables Proxier.
	I0817 01:51:59.228558       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 01:51:59.228568       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 01:51:59.229489       1 server.go:643] Version: v1.21.3
	I0817 01:51:59.234867       1 config.go:315] Starting service config controller
	I0817 01:51:59.234890       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 01:51:59.236683       1 config.go:224] Starting endpoint slice config controller
	I0817 01:51:59.236698       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 01:51:59.242351       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 01:51:59.243149       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 01:51:59.338912       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 01:51:59.338971       1 shared_informer.go:247] Caches are synced for service config 
	W0817 01:58:09.244582       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:05:11.245597       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] <==
	* W0817 01:51:41.468231       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 01:51:41.468338       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 01:51:41.468430       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 01:51:41.611648       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 01:51:41.615019       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 01:51:41.616612       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 01:51:41.616756       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 01:51:41.622145       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 01:51:41.624737       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 01:51:41.627800       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 01:51:41.628373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 01:51:41.628434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 01:51:41.628492       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 01:51:41.628547       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 01:51:41.628600       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628650       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628754       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 01:51:41.628805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 01:51:41.630964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 01:51:41.631026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:42.555258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 01:51:42.563233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 01:51:42.595603       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 01:51:44.616129       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 01:50:50 UTC, end at Tue 2021-08-17 02:08:00 UTC. --
	Aug 17 02:07:02 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:02.134371    1147 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi-hostpath/csi.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 17 02:07:02 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:02.134381    1147 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 17 02:07:02 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:02.134419    1147 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	Aug 17 02:07:07 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:07.896020    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:07:07 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:07.896425    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:07:08 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:08.895930    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:07:08 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:08.896439    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 02:07:10 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:10.896819    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx" podUID=ed08e79b-1781-4708-8f29-d5b69cc3c7c6
	Aug 17 02:07:19 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:19.895525    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:07:19 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:19.895924    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 02:07:20 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:20.896133    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:07:20 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:20.896894    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:07:25 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:25.896142    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx" podUID=ed08e79b-1781-4708-8f29-d5b69cc3c7c6
	Aug 17 02:07:30 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:30.896276    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:07:30 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:30.896649    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 02:07:35 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:35.896444    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:07:35 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:35.896852    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:07:37 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:37.897030    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx" podUID=ed08e79b-1781-4708-8f29-d5b69cc3c7c6
	Aug 17 02:07:43 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:43.895618    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:07:43 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:43.896004    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 02:07:50 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:50.896441    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:07:50 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:50.897171    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:07:50 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:50.897649    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx" podUID=ed08e79b-1781-4708-8f29-d5b69cc3c7c6
	Aug 17 02:07:57 addons-20210817015042-1554185 kubelet[1147]: I0817 02:07:57.895950    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:07:57 addons-20210817015042-1554185 kubelet[1147]: E0817 02:07:57.896361    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	
	* 
	* ==> storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] <==
	* I0817 01:52:45.168349       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 01:52:45.223745       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 01:52:45.226921       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 01:52:45.243264       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 01:52:45.243748       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41860dbd-59f4-40f3-b06c-d38f89989bf1", APIVersion:"v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01 became leader
	I0817 01:52:45.243789       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01!
	I0817 01:52:45.346906       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210817015042-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: nginx ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210817015042-1554185 describe pod nginx ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210817015042-1554185 describe pod nginx ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j: exit status 1 (98.534285ms)

                                                
                                                
-- stdout --
	Name:         nginx
	Namespace:    default
	Priority:     0
	Node:         addons-20210817015042-1554185/192.168.49.2
	Start Time:   Tue, 17 Aug 2021 02:03:58 +0000
	Labels:       run=nginx
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nnxzp (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-nnxzp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m3s                  default-scheduler  Successfully assigned default/nginx to addons-20210817015042-1554185
	  Normal   Pulling    2m37s (x4 over 4m2s)  kubelet            Pulling image "nginx:alpine"
	  Warning  Failed     2m36s (x4 over 4m1s)  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m36s (x4 over 4m1s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m8s (x6 over 4m1s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    116s (x7 over 4m1s)   kubelet            Back-off pulling image "nginx:alpine"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-msw6w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xpb6j" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210817015042-1554185 describe pod nginx ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j: exit status 1
--- FAIL: TestAddons/parallel/Ingress (243.42s)
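Note: the ImagePullBackOff above is a Docker Hub 429 rate-limit error rather than an ingress-specific failure. A possible mitigation, sketched here only and not part of this run (the secret name "regcred" and the credentials are placeholders), is to pre-load the image into minikube's cache or authenticate image pulls:

	# Pre-load nginx:alpine into minikube's image cache so the node does not pull from Docker Hub
	out/minikube-linux-arm64 -p addons-20210817015042-1554185 cache add nginx:alpine

	# Or create a pull secret with authenticated Docker Hub credentials (placeholder values shown)
	# and reference it from the pod spec via imagePullSecrets
	kubectl --context addons-20210817015042-1554185 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<password>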

                                                
                                    
TestAddons/parallel/Olm (732.33s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 32.117127ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:467: olm-operator stabilized in 34.086937ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:469: failed waiting for packageserver deployment to stabilize: timed out waiting for the condition
addons_test.go:471: packageserver stabilized in 6m0.035140622s
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
helpers_test.go:343: "catalog-operator-75d496484d-86xl7" [ab50ee4e-7255-4a75-b1d3-6cf397a713a6] Running / Ready:ContainersNotReady (containers with unready status: [catalog-operator]) / ContainersReady:ContainersNotReady (containers with unready status: [catalog-operator])
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.006259976s
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
helpers_test.go:343: "olm-operator-859c88c96-j28dd" [bc13a715-3c7d-486c-846e-64675afe63d0] Running / Ready:ContainersNotReady (containers with unready status: [olm-operator]) / ContainersReady:ContainersNotReady (containers with unready status: [olm-operator])
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.005656083s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:479: ***** TestAddons/parallel/Olm: pod "app=packageserver" failed to start within 6m0s: timed out waiting for the condition ****
addons_test.go:479: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
addons_test.go:479: TestAddons/parallel/Olm: showing logs for failed pods as of 2021-08-17 02:05:25.283231346 +0000 UTC m=+956.077979138
addons_test.go:480: failed waiting for pod packageserver: app=packageserver within 6m0s: timed out waiting for the condition
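A few commands that could help narrow down why the app=packageserver pods never became ready (a sketch; it assumes the OLM deployment is named "packageserver", which this log does not confirm):

	# List the packageserver pods and their current state in the olm namespace
	kubectl --context addons-20210817015042-1554185 -n olm get pods -l app=packageserver -o wide

	# Inspect the deployment's conditions and recent events
	kubectl --context addons-20210817015042-1554185 -n olm describe deployment packageserver

	# Tail container logs from any packageserver pods that did start
	kubectl --context addons-20210817015042-1554185 -n olm logs -l app=packageserver --all-containers --tail=50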
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Olm]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210817015042-1554185
helpers_test.go:236: (dbg) docker inspect addons-20210817015042-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416",
	        "Created": "2021-08-17T01:50:49.008425565Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1555108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T01:50:49.513909075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/hosts",
	        "LogPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416-json.log",
	        "Name": "/addons-20210817015042-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210817015042-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210817015042-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/merged",
	                "UpperDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/diff",
	                "WorkDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-20210817015042-1554185",
	                "Source": "/var/lib/docker/volumes/addons-20210817015042-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210817015042-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210817015042-1554185",
	                "name.minikube.sigs.k8s.io": "addons-20210817015042-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3e0a22fba78ee7873eb198b4450cb747bf4f2dc90aa87985648e04a1bfa9520",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50314"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50313"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50310"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50312"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50311"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3e0a22fba78",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210817015042-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d0219469219e",
	                        "addons-20210817015042-1554185"
	                    ],
	                    "NetworkID": "a9a617dbec2c4687c7bfc4bea262a36b8329d70029602dc944aed84d4dfb4f83",
	                    "EndpointID": "dad39de7953aad4709a05c2c9027de032d29f0302e6751762f5bb275759d2909",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
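The full docker inspect dump above can be narrowed to just the fields the post-mortem cares about by using Go-template format strings (a sketch; the field names are taken from the JSON above):

	# Container state only
	docker inspect -f '{{.State.Status}}' addons-20210817015042-1554185

	# IP address of the container on the minikube network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-20210817015042-1554185

	# Published host ports (22, 2376, 5000, 8443, 32443)
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-20210817015042-1554185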
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
helpers_test.go:245: <<< TestAddons/parallel/Olm FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Olm]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210817015042-1554185 logs -n 25
helpers_test.go:253: TestAddons/parallel/Olm logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                  | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	| delete  | -p                                     | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	|         | download-only-20210817014929-1554185   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	|         | download-only-20210817014929-1554185   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-docker-20210817015028-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:42 UTC | Tue, 17 Aug 2021 01:50:42 UTC |
	|         | download-docker-20210817015028-1554185 |                                        |         |         |                               |                               |
	| start   | -p                                     | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:43 UTC | Tue, 17 Aug 2021 01:53:14 UTC |
	|         | addons-20210817015042-1554185          |                                        |         |         |                               |                               |
	|         | --wait=true --memory=4000              |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --addons=registry                      |                                        |         |         |                               |                               |
	|         | --addons=metrics-server                |                                        |         |         |                               |                               |
	|         | --addons=olm                           |                                        |         |         |                               |                               |
	|         | --addons=volumesnapshots               |                                        |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver           |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --addons=ingress                       |                                        |         |         |                               |                               |
	|         | --addons=gcp-auth                      |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:54:25 UTC | Tue, 17 Aug 2021 01:54:25 UTC |
	|         | ip                                     |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:57:09 UTC | Tue, 17 Aug 2021 01:57:10 UTC |
	|         | addons disable registry                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:57:10 UTC | Tue, 17 Aug 2021 01:57:11 UTC |
	|         | logs -n 25                             |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:57:21 UTC | Tue, 17 Aug 2021 01:57:48 UTC |
	|         | addons disable gcp-auth                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:03:49 UTC | Tue, 17 Aug 2021 02:03:51 UTC |
	|         | logs -n 25                             |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:03:57 UTC | Tue, 17 Aug 2021 02:03:57 UTC |
	|         | addons disable metrics-server          |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 01:50:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 01:50:43.004283 1554672 out.go:298] Setting OutFile to fd 1 ...
	I0817 01:50:43.004408 1554672 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:50:43.004431 1554672 out.go:311] Setting ErrFile to fd 2...
	I0817 01:50:43.004441 1554672 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:50:43.004581 1554672 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 01:50:43.004871 1554672 out.go:305] Setting JSON to false
	I0817 01:50:43.005775 1554672 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34381,"bootTime":1629130662,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 01:50:43.005843 1554672 start.go:121] virtualization:  
	I0817 01:50:43.008113 1554672 out.go:177] * [addons-20210817015042-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 01:50:43.010059 1554672 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 01:50:43.009081 1554672 notify.go:169] Checking for updates...
	I0817 01:50:43.011571 1554672 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 01:50:43.013130 1554672 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 01:50:43.014848 1554672 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 01:50:43.015025 1554672 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 01:50:43.049197 1554672 docker.go:132] docker version: linux-20.10.8
	I0817 01:50:43.049279 1554672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:50:43.144133 1554672 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:50:43.088038469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:50:43.144227 1554672 docker.go:244] overlay module found
	I0817 01:50:43.146324 1554672 out.go:177] * Using the docker driver based on user configuration
	I0817 01:50:43.146348 1554672 start.go:278] selected driver: docker
	I0817 01:50:43.146353 1554672 start.go:751] validating driver "docker" against <nil>
	I0817 01:50:43.146367 1554672 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 01:50:43.146408 1554672 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 01:50:43.146423 1554672 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 01:50:43.147842 1554672 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 01:50:43.148132 1554672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:50:43.222251 1554672 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:50:43.17341921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInf
o:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:50:43.222365 1554672 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 01:50:43.222521 1554672 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 01:50:43.222542 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:50:43.222549 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:50:43.222565 1554672 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 01:50:43.222570 1554672 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 01:50:43.222582 1554672 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 01:50:43.222589 1554672 start_flags.go:277] config:
	{Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:50:43.224429 1554672 out.go:177] * Starting control plane node addons-20210817015042-1554185 in cluster addons-20210817015042-1554185
	I0817 01:50:43.224467 1554672 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 01:50:43.226166 1554672 out.go:177] * Pulling base image ...
	I0817 01:50:43.226186 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:50:43.226218 1554672 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 01:50:43.226230 1554672 cache.go:56] Caching tarball of preloaded images
	I0817 01:50:43.226359 1554672 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 01:50:43.226380 1554672 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 01:50:43.226662 1554672 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json ...
	I0817 01:50:43.226688 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json: {Name:mk832a7647425177a5f2be8874629457bb58883b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:50:43.226846 1554672 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 01:50:43.267020 1554672 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 01:50:43.267048 1554672 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 01:50:43.267060 1554672 cache.go:205] Successfully downloaded all kic artifacts
	I0817 01:50:43.267095 1554672 start.go:313] acquiring machines lock for addons-20210817015042-1554185: {Name:mkc848aa47e63f497fa6d048b39bc33e9d106216 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 01:50:43.267208 1554672 start.go:317] acquired machines lock for "addons-20210817015042-1554185" in 92.061µs
	I0817 01:50:43.267235 1554672 start.go:89] Provisioning new machine with config: &{Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 01:50:43.267309 1554672 start.go:126] createHost starting for "" (driver="docker")
	I0817 01:50:43.269344 1554672 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0817 01:50:43.269558 1554672 start.go:160] libmachine.API.Create for "addons-20210817015042-1554185" (driver="docker")
	I0817 01:50:43.269585 1554672 client.go:168] LocalClient.Create starting
	I0817 01:50:43.269667 1554672 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0817 01:50:43.834992 1554672 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0817 01:50:44.271080 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 01:50:44.298072 1554672 cli_runner.go:162] docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 01:50:44.298133 1554672 network_create.go:255] running [docker network inspect addons-20210817015042-1554185] to gather additional debugging logs...
	I0817 01:50:44.298149 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185
	W0817 01:50:44.324372 1554672 cli_runner.go:162] docker network inspect addons-20210817015042-1554185 returned with exit code 1
	I0817 01:50:44.324396 1554672 network_create.go:258] error running [docker network inspect addons-20210817015042-1554185]: docker network inspect addons-20210817015042-1554185: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210817015042-1554185
	I0817 01:50:44.324409 1554672 network_create.go:260] output of [docker network inspect addons-20210817015042-1554185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210817015042-1554185
	
	** /stderr **
	I0817 01:50:44.324473 1554672 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 01:50:44.351093 1554672 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x40005be280] misses:0}
	I0817 01:50:44.351140 1554672 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 01:50:44.351162 1554672 network_create.go:106] attempt to create docker network addons-20210817015042-1554185 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 01:50:44.351211 1554672 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210817015042-1554185
	I0817 01:50:44.413803 1554672 network_create.go:90] docker network addons-20210817015042-1554185 192.168.49.0/24 created
	I0817 01:50:44.413829 1554672 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210817015042-1554185" container
	I0817 01:50:44.413892 1554672 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 01:50:44.440106 1554672 cli_runner.go:115] Run: docker volume create addons-20210817015042-1554185 --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --label created_by.minikube.sigs.k8s.io=true
	I0817 01:50:44.467518 1554672 oci.go:102] Successfully created a docker volume addons-20210817015042-1554185
	I0817 01:50:44.467581 1554672 cli_runner.go:115] Run: docker run --rm --name addons-20210817015042-1554185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --entrypoint /usr/bin/test -v addons-20210817015042-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 01:50:48.841251 1554672 cli_runner.go:168] Completed: docker run --rm --name addons-20210817015042-1554185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --entrypoint /usr/bin/test -v addons-20210817015042-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (4.373634594s)
	I0817 01:50:48.841276 1554672 oci.go:106] Successfully prepared a docker volume addons-20210817015042-1554185
	W0817 01:50:48.841301 1554672 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0817 01:50:48.841310 1554672 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0817 01:50:48.841360 1554672 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 01:50:48.841549 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:50:48.841570 1554672 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 01:50:48.841627 1554672 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210817015042-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 01:50:48.971581 1554672 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210817015042-1554185 --name addons-20210817015042-1554185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210817015042-1554185 --network addons-20210817015042-1554185 --ip 192.168.49.2 --volume addons-20210817015042-1554185:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 01:50:49.523596 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Running}}
	I0817 01:50:49.590786 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:50:49.633119 1554672 cli_runner.go:115] Run: docker exec addons-20210817015042-1554185 stat /var/lib/dpkg/alternatives/iptables
	I0817 01:50:49.741896 1554672 oci.go:278] the created container "addons-20210817015042-1554185" has a running status.
	I0817 01:50:49.741921 1554672 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa...
	I0817 01:50:50.532064 1554672 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 01:50:50.667778 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:50:50.707368 1554672 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 01:50:50.707384 1554672 kic_runner.go:115] Args: [docker exec --privileged addons-20210817015042-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 01:51:00.466206 1554672 kic_runner.go:124] Done: [docker exec --privileged addons-20210817015042-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]: (9.758798263s)
	I0817 01:51:02.783214 1554672 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210817015042-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (13.941553277s)
	I0817 01:51:02.783245 1554672 kic.go:188] duration metric: took 13.941672 seconds to extract preloaded images to volume
	I0817 01:51:02.783324 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:02.814748 1554672 machine.go:88] provisioning docker machine ...
	I0817 01:51:02.814781 1554672 ubuntu.go:169] provisioning hostname "addons-20210817015042-1554185"
	I0817 01:51:02.814865 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:02.842333 1554672 main.go:130] libmachine: Using SSH client type: native
	I0817 01:51:02.842498 1554672 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50314 <nil> <nil>}
	I0817 01:51:02.842516 1554672 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210817015042-1554185 && echo "addons-20210817015042-1554185" | sudo tee /etc/hostname
	I0817 01:51:02.970606 1554672 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210817015042-1554185
	
	I0817 01:51:02.970693 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:02.999373 1554672 main.go:130] libmachine: Using SSH client type: native
	I0817 01:51:02.999533 1554672 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50314 <nil> <nil>}
	I0817 01:51:02.999560 1554672 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210817015042-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210817015042-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210817015042-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 01:51:03.114034 1554672 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 01:51:03.114055 1554672 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 01:51:03.114074 1554672 ubuntu.go:177] setting up certificates
	I0817 01:51:03.114082 1554672 provision.go:83] configureAuth start
	I0817 01:51:03.114135 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.141579 1554672 provision.go:138] copyHostCerts
	I0817 01:51:03.141653 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 01:51:03.141736 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 01:51:03.141784 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 01:51:03.141822 1554672 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.addons-20210817015042-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210817015042-1554185]
	I0817 01:51:03.398920 1554672 provision.go:172] copyRemoteCerts
	I0817 01:51:03.398968 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 01:51:03.399007 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.426820 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.508566 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 01:51:03.525114 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0817 01:51:03.539071 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 01:51:03.553109 1554672 provision.go:86] duration metric: configureAuth took 439.012307ms
	I0817 01:51:03.553124 1554672 ubuntu.go:193] setting minikube options for container-runtime
	I0817 01:51:03.553268 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:03.553275 1554672 machine.go:91] provisioned docker machine in 738.505134ms
	I0817 01:51:03.553280 1554672 client.go:171] LocalClient.Create took 20.283690224s
	I0817 01:51:03.553289 1554672 start.go:168] duration metric: libmachine.API.Create for "addons-20210817015042-1554185" took 20.283731225s
	I0817 01:51:03.553296 1554672 start.go:267] post-start starting for "addons-20210817015042-1554185" (driver="docker")
	I0817 01:51:03.553301 1554672 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 01:51:03.553340 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 01:51:03.553372 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.581866 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.664711 1554672 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 01:51:03.667021 1554672 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 01:51:03.667044 1554672 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 01:51:03.667055 1554672 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 01:51:03.667073 1554672 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 01:51:03.667081 1554672 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 01:51:03.667131 1554672 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 01:51:03.667155 1554672 start.go:270] post-start completed in 113.85344ms
	I0817 01:51:03.667437 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.695177 1554672 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json ...
	I0817 01:51:03.695366 1554672 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 01:51:03.695414 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.722965 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.802744 1554672 start.go:129] duration metric: createHost completed in 20.535424588s
	I0817 01:51:03.802761 1554672 start.go:80] releasing machines lock for "addons-20210817015042-1554185", held for 20.535539837s
	I0817 01:51:03.802834 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.830388 1554672 ssh_runner.go:149] Run: systemctl --version
	I0817 01:51:03.830437 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.830658 1554672 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 01:51:03.830713 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.864441 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.872939 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.950680 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 01:51:04.148514 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 01:51:04.156921 1554672 docker.go:153] disabling docker service ...
	I0817 01:51:04.156964 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 01:51:04.172287 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 01:51:04.180567 1554672 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 01:51:04.253873 1554672 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 01:51:04.337794 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 01:51:04.346079 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 01:51:04.356986 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
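The two writes above configure the CRI side of the node: /etc/crictl.yaml points crictl's runtime and image endpoints at the containerd socket, and the base64 blob is simply the rendered /etc/containerd/config.toml (runc v2 runtime, overlayfs snapshotter, CRI and registry settings). A sketch of how to inspect both by hand via this run's node container; <BASE64_FROM_LOG> stands for the blob logged above:
	docker exec addons-20210817015042-1554185 sudo crictl info                      # crictl now reaches containerd via /etc/crictl.yaml
	docker exec addons-20210817015042-1554185 sudo cat /etc/containerd/config.toml  # or: echo '<BASE64_FROM_LOG>' | base64 -d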
	I0817 01:51:04.369213 1554672 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 01:51:04.375739 1554672 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 01:51:04.381264 1554672 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 01:51:04.455762 1554672 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 01:51:04.531663 1554672 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 01:51:04.531729 1554672 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 01:51:04.535130 1554672 start.go:413] Will wait 60s for crictl version
	I0817 01:51:04.535189 1554672 ssh_runner.go:149] Run: sudo crictl version
	I0817 01:51:04.564551 1554672 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T01:51:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 01:51:15.611398 1554672 ssh_runner.go:149] Run: sudo crictl version
	I0817 01:51:15.634965 1554672 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 01:51:15.635034 1554672 ssh_runner.go:149] Run: containerd --version
	I0817 01:51:15.656211 1554672 ssh_runner.go:149] Run: containerd --version
	I0817 01:51:15.679165 1554672 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 01:51:15.679262 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 01:51:15.708112 1554672 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 01:51:15.711074 1554672 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 01:51:15.720057 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:51:15.720115 1554672 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 01:51:15.753630 1554672 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 01:51:15.753654 1554672 containerd.go:517] Images already preloaded, skipping extraction
	I0817 01:51:15.753696 1554672 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 01:51:15.775284 1554672 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 01:51:15.775306 1554672 cache_images.go:74] Images are preloaded, skipping loading
	I0817 01:51:15.775376 1554672 ssh_runner.go:149] Run: sudo crictl info
	I0817 01:51:15.796264 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:51:15.796286 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:51:15.796297 1554672 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 01:51:15.796310 1554672 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210817015042-1554185 NodeName:addons-20210817015042-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFil
e:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 01:51:15.796446 1554672 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "addons-20210817015042-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
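The YAML above is the kubeadm.yaml that minikube renders from the kubeadm options logged at kubeadm.go:153: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration in one multi-document file. Once it has been copied onto the node (further down in the log), it can be inspected, or exercised with a dry run before the real init; a sketch using this run's paths:
	docker exec addons-20210817015042-1554185 sudo cat /var/tmp/minikube/kubeadm.yaml
	docker exec addons-20210817015042-1554185 sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run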
	
	I0817 01:51:15.796533 1554672 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-20210817015042-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
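The unit drop-in above points kubelet at containerd (remote runtime and image endpoints on the containerd socket) and at the cni-conf-dir /etc/cni/net.mk taken from the profile's ExtraOptions. After the drop-in and unit file are copied over (the scp lines below), picking up the change is the usual systemd sequence; a rough hand-run equivalent of what the bootstrapper does when it brings the node up (not shown verbatim in this excerpt):
	docker exec addons-20210817015042-1554185 sudo systemctl daemon-reload
	docker exec addons-20210817015042-1554185 sudo systemctl restart kubelet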
	I0817 01:51:15.796591 1554672 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 01:51:15.802721 1554672 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 01:51:15.802788 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 01:51:15.808456 1554672 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (574 bytes)
	I0817 01:51:15.819782 1554672 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 01:51:15.830993 1554672 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0817 01:51:15.841895 1554672 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 01:51:15.844431 1554672 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 01:51:15.852834 1554672 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185 for IP: 192.168.49.2
	I0817 01:51:15.852892 1554672 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 01:51:16.232897 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt ...
	I0817 01:51:16.232924 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt: {Name:mkc452a3ca463d1cef7aa1398b1abd9dddd24545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.233112 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key ...
	I0817 01:51:16.233129 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key: {Name:mkb1c0cc6e35e952c8fa312da56d58ae26957187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.233218 1554672 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 01:51:16.929155 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt ...
	I0817 01:51:16.929187 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt: {Name:mk17a5a660a62b953e570d93eac621069f930efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.929368 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key ...
	I0817 01:51:16.929384 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key: {Name:mk40bf80fb6d166c627fea37bd45ce901649a411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.929516 1554672 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key
	I0817 01:51:16.929537 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt with IP's: []
	I0817 01:51:17.141841 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt ...
	I0817 01:51:17.141869 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: {Name:mk127978c85cd8b22e7e4466afd86c3104950f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.142041 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key ...
	I0817 01:51:17.142056 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key: {Name:mk9c80a73b58e8a5fc9e3f4aca38da7b4d098319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.142143 1554672 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2
	I0817 01:51:17.142152 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 01:51:17.697755 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 ...
	I0817 01:51:17.697786 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2: {Name:mk68739490e6778fecd80380c013c3c92d6d4458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.698773 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2 ...
	I0817 01:51:17.698790 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2: {Name:mk844eac0cbe48c9235e9d8a8ec3aa0d9a836734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.698894 1554672 certs.go:308] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt
	I0817 01:51:17.698954 1554672 certs.go:312] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key
	I0817 01:51:17.699002 1554672 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key
	I0817 01:51:17.699012 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt with IP's: []
	I0817 01:51:18.551109 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt ...
	I0817 01:51:18.551144 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt: {Name:mkb606f4652991a4936ad1fb4f336e911d7af05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:18.551327 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key ...
	I0817 01:51:18.551342 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key: {Name:mkddf28d3df3bc53b2858cabdc2cbc08941228fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:18.551516 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 01:51:18.551557 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 01:51:18.551586 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 01:51:18.551613 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 01:51:18.554164 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 01:51:18.569715 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 01:51:18.584425 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 01:51:18.598873 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 01:51:18.613294 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 01:51:18.627638 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 01:51:18.642450 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 01:51:18.657110 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 01:51:18.671462 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 01:51:18.686137 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 01:51:18.696974 1554672 ssh_runner.go:149] Run: openssl version
	I0817 01:51:18.701232 1554672 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 01:51:18.707560 1554672 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.710430 1554672 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.710492 1554672 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.714912 1554672 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
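The two commands above install the minikube CA into the node's OpenSSL trust store: the symlink name is the certificate's subject hash, so /etc/ssl/certs/b5213941.0 is simply the hash of minikubeCA.pem computed one line earlier. A sketch reproducing it by hand:
	docker exec addons-20210817015042-1554185 sudo openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # should print b5213941
	docker exec addons-20210817015042-1554185 ls -l /etc/ssl/certs/b5213941.0                                                # symlink to minikubeCA.pem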
	I0817 01:51:18.721735 1554672 kubeadm.go:390] StartCluster: {Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:51:18.721819 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 01:51:18.721874 1554672 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 01:51:18.749686 1554672 cri.go:76] found id: ""
	I0817 01:51:18.749758 1554672 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 01:51:18.755843 1554672 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 01:51:18.761633 1554672 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 01:51:18.761681 1554672 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 01:51:18.767360 1554672 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 01:51:18.767403 1554672 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 01:51:44.489308 1554672 out.go:204]   - Generating certificates and keys ...
	I0817 01:51:44.492258 1554672 out.go:204]   - Booting up control plane ...
	I0817 01:51:44.495405 1554672 out.go:204]   - Configuring RBAC rules ...
	I0817 01:51:44.497771 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:51:44.497802 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:51:44.499744 1554672 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 01:51:44.499923 1554672 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 01:51:44.514536 1554672 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 01:51:44.514555 1554672 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 01:51:44.537490 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 01:51:45.293207 1554672 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 01:51:45.293283 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:45.293354 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=addons-20210817015042-1554185 minikube.k8s.io/updated_at=2021_08_17T01_51_45_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:45.441142 1554672 ops.go:34] apiserver oom_adj: -16
	I0817 01:51:45.441307 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:46.028917 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:46.528512 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:47.028526 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:47.529129 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:48.028453 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:48.529207 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:49.029151 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:49.528902 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:50.028980 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:50.528509 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:51.028493 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:51.528957 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:52.028487 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:52.529123 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:53.029078 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:53.528513 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:54.029046 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:54.529488 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:55.029473 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:55.529461 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:56.029173 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:56.529368 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.028522 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.528583 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.667234 1554672 kubeadm.go:985] duration metric: took 12.373989422s to wait for elevateKubeSystemPrivileges.
	I0817 01:51:57.667260 1554672 kubeadm.go:392] StartCluster complete in 38.945530358s
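The run of kubectl get sa default calls above (one roughly every 0.5s from 01:51:45 to 01:51:57) is the wait for the default service account to exist after the minikube-rbac clusterrolebinding is created, i.e. a plain polling loop. An equivalent sketch, run on the node with this run's paths:
	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do sleep 0.5; done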
	I0817 01:51:57.667277 1554672 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:57.667387 1554672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 01:51:57.667820 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:58.208648 1554672 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210817015042-1554185" rescaled to 1
	I0817 01:51:58.208753 1554672 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 01:51:58.211352 1554672 out.go:177] * Verifying Kubernetes components...
	I0817 01:51:58.211399 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 01:51:58.208813 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 01:51:58.208883 1554672 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0817 01:51:58.211533 1554672 addons.go:59] Setting volumesnapshots=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.211549 1554672 addons.go:135] Setting addon volumesnapshots=true in "addons-20210817015042-1554185"
	I0817 01:51:58.211576 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.212086 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.212136 1554672 addons.go:59] Setting ingress=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.212151 1554672 addons.go:135] Setting addon ingress=true in "addons-20210817015042-1554185"
	I0817 01:51:58.212176 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.212566 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.212695 1554672 addons.go:59] Setting metrics-server=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.212709 1554672 addons.go:135] Setting addon metrics-server=true in "addons-20210817015042-1554185"
	I0817 01:51:58.212725 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.213112 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.213170 1554672 addons.go:59] Setting olm=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.213183 1554672 addons.go:135] Setting addon olm=true in "addons-20210817015042-1554185"
	I0817 01:51:58.213199 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.213584 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.213633 1554672 addons.go:59] Setting registry=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.213647 1554672 addons.go:135] Setting addon registry=true in "addons-20210817015042-1554185"
	I0817 01:51:58.213662 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.214028 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.214076 1554672 addons.go:59] Setting storage-provisioner=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.214086 1554672 addons.go:135] Setting addon storage-provisioner=true in "addons-20210817015042-1554185"
	W0817 01:51:58.214091 1554672 addons.go:147] addon storage-provisioner should already be in state true
	I0817 01:51:58.214110 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.214476 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.214533 1554672 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.214556 1554672 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210817015042-1554185"
	I0817 01:51:58.214578 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.216636 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.209046 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:58.236087 1554672 addons.go:59] Setting default-storageclass=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.236110 1554672 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210817015042-1554185"
	I0817 01:51:58.236416 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.236497 1554672 addons.go:59] Setting gcp-auth=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.236511 1554672 mustload.go:65] Loading cluster: addons-20210817015042-1554185
	I0817 01:51:58.236645 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:58.236850 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.386147 1554672 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0817 01:51:58.387928 1554672 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0817 01:51:58.390873 1554672 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0817 01:51:58.390923 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0817 01:51:58.390932 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0817 01:51:58.390988 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.537862 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0817 01:51:58.539571 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0817 01:51:58.541491 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0817 01:51:58.545257 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0817 01:51:58.547105 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0817 01:51:58.548521 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0817 01:51:58.548575 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0817 01:51:58.548588 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0817 01:51:58.550068 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0817 01:51:58.548640 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.554947 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0817 01:51:58.556578 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0817 01:51:58.558141 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0817 01:51:58.558186 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0817 01:51:58.558193 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0817 01:51:58.558233 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.584190 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.586499 1554672 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0817 01:51:58.586556 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 01:51:58.586564 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 01:51:58.586607 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.586985 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
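The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway 192.168.49.1 from inside the cluster, by inserting a hosts { ... fallthrough } block ahead of the forward plugin. The patched Corefile can be checked on the node with the same kubeconfig:
	sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml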
	I0817 01:51:58.588594 1554672 node_ready.go:35] waiting up to 6m0s for node "addons-20210817015042-1554185" to be "Ready" ...
	I0817 01:51:58.646058 1554672 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0817 01:51:58.647847 1554672 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0817 01:51:58.697505 1554672 out.go:177]   - Using image registry:2.7.1
	I0817 01:51:58.699108 1554672 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0817 01:51:58.699188 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0817 01:51:58.699196 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0817 01:51:58.699248 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.738566 1554672 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 01:51:58.738649 1554672 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 01:51:58.738662 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 01:51:58.738710 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.776023 1554672 addons.go:135] Setting addon default-storageclass=true in "addons-20210817015042-1554185"
	W0817 01:51:58.776048 1554672 addons.go:147] addon default-storageclass should already be in state true
	I0817 01:51:58.776074 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.776526 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.805011 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.875835 1554672 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0817 01:51:58.875902 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0817 01:51:58.876004 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.903641 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.916757 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0817 01:51:58.916831 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.922593 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.927465 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.028544 1554672 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 01:51:59.028564 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 01:51:59.028615 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:59.050931 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.052785 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.078548 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.103856 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.136455 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.164633 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0817 01:51:59.164654 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0817 01:51:59.294730 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0817 01:51:59.295256 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0817 01:51:59.334238 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0817 01:51:59.362229 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0817 01:51:59.362285 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0817 01:51:59.419723 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 01:51:59.419773 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0817 01:51:59.430396 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 01:51:59.439975 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0817 01:51:59.440022 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0817 01:51:59.457816 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0817 01:51:59.457862 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0817 01:51:59.478937 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0817 01:51:59.484531 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0817 01:51:59.484544 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0817 01:51:59.492438 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 01:51:59.516776 1554672 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0817 01:51:59.516819 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0817 01:51:59.533786 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0817 01:51:59.533830 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0817 01:51:59.538721 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 01:51:59.538765 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 01:51:59.551896 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0817 01:51:59.551933 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0817 01:51:59.577593 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0817 01:51:59.577637 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0817 01:51:59.586368 1554672 addons.go:135] Setting addon gcp-auth=true in "addons-20210817015042-1554185"
	I0817 01:51:59.586439 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:59.586992 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:59.602216 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0817 01:51:59.643183 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0817 01:51:59.643200 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0817 01:51:59.643758 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 01:51:59.643772 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 01:51:59.653214 1554672 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0817 01:51:59.654750 1554672 out.go:177]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0817 01:51:59.654796 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0817 01:51:59.654803 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0817 01:51:59.654918 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:59.662952 1554672 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.075944471s)
	I0817 01:51:59.662971 1554672 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
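The Completed line above confirms the CoreDNS rewrite: minikube pipes the coredns ConfigMap through sed so that a hosts stanza is inserted just before the forward plugin, which makes host.minikube.internal resolve to the gateway address 192.168.49.1 from inside the cluster. Reconstructed from the sed expression in the command (not captured from the live cluster), the patched Corefile fragment should read roughly:

    # surrounding Corefile plugins omitted
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf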
	I0817 01:51:59.675174 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0817 01:51:59.683436 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0817 01:51:59.683451 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0817 01:51:59.698631 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0817 01:51:59.698646 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0817 01:51:59.718898 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.744738 1554672 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:51:59.744759 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0817 01:51:59.753993 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0817 01:51:59.754011 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0817 01:51:59.768957 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 01:51:59.806478 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0817 01:51:59.806499 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0817 01:51:59.841542 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:51:59.896907 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0817 01:51:59.896928 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0817 01:52:00.017093 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0817 01:52:00.017114 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0817 01:52:00.098144 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0817 01:52:00.098165 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0817 01:52:00.148128 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0817 01:52:00.148150 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0817 01:52:00.212978 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 01:52:00.213000 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0817 01:52:00.240295 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0817 01:52:00.240316 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0817 01:52:00.328162 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 01:52:00.392480 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0817 01:52:00.392504 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0817 01:52:00.475797 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 01:52:00.475819 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0817 01:52:00.588665 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 01:52:00.613870 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:01.300912 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.870464987s)
	I0817 01:52:01.300955 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (1.966695194s)
	I0817 01:52:01.300964 1554672 addons.go:313] Verifying addon ingress=true in "addons-20210817015042-1554185"
	I0817 01:52:01.302793 1554672 out.go:177] * Verifying ingress addon...
	I0817 01:52:01.301217 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.808764624s)
	I0817 01:52:01.304580 1554672 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0817 01:52:01.324823 1554672 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0817 01:52:01.324869 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
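The kapi.go:96 lines that repeat from here on are minikube polling each addon's pods by label selector until they leave Pending; the same pattern is used below for the registry, gcp-auth and csi-hostpath-driver selectors. A minimal, self-contained sketch of such a wait loop with client-go is given here for reference; the interval, error handling and function names are illustrative assumptions, not minikube's actual kapi.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector in ns until every one of
// them reports phase Running, or the timeout expires.
func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err // a transient API error could also be tolerated here
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet, keep waiting
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPodsRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}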
	I0817 01:52:01.866150 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:02.455106 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:02.756616 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:02.904020 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:03.374970 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:03.900784 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:04.389210 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:04.828604 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:05.113059 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:05.328501 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:05.828619 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.329237 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.849401 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.17419898s)
	I0817 01:52:06.849430 1554672 addons.go:313] Verifying addon registry=true in "addons-20210817015042-1554185"
	I0817 01:52:06.851610 1554672 out.go:177] * Verifying registry addon...
	I0817 01:52:06.849712 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.080729919s)
	I0817 01:52:06.849894 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (7.247660472s)
	I0817 01:52:06.850006 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.008434003s)
	I0817 01:52:06.850075 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (6.521890577s)
	I0817 01:52:06.853580 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0817 01:52:06.853656 1554672 addons.go:313] Verifying addon metrics-server=true in "addons-20210817015042-1554185"
	W0817 01:52:06.853699 1554672 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0817 01:52:06.853907 1554672 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	W0817 01:52:06.853735 1554672 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0817 01:52:06.853958 1554672 retry.go:31] will retry after 291.140013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
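Both apply failures above are an ordering effect rather than a broken addon: crds.yaml and the snapshot CRDs are created in the same kubectl apply that also submits instances of those kinds (OperatorGroup, ClusterServiceVersion, CatalogSource, VolumeSnapshotClass), and the API server does not serve a just-created CRD immediately, hence "no matches for kind". minikube's answer is the retry.go backoff shown above; the re-runs at 01:52:07 below complete successfully. A minimal sketch of that retry-the-apply pattern, assuming a fixed backoff is long enough for the CRDs to become established (the helper name and values are illustrative, not minikube's actual retry.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` a few times so that custom resources
// submitted alongside their CRDs eventually apply once the new kinds are served.
func applyWithRetry(kubeconfig string, manifests []string, attempts int, backoff time.Duration) error {
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed (attempt %d): %v\n%s", i+1, err, out)
		time.Sleep(backoff) // comparable in spirit to the ~300-360ms retries logged above
	}
	return lastErr
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/kubeconfig",
		[]string{"/etc/kubernetes/addons/crds.yaml", "/etc/kubernetes/addons/olm.yaml"},
		5, 400*time.Millisecond,
	)
	if err != nil {
		fmt.Println(err)
	}
}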
	I0817 01:52:06.853773 1554672 addons.go:313] Verifying addon gcp-auth=true in "addons-20210817015042-1554185"
	I0817 01:52:06.856170 1554672 out.go:177] * Verifying gcp-auth addon...
	I0817 01:52:06.858037 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0817 01:52:06.879437 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.901505 1554672 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 01:52:06.901521 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:06.902116 1554672 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0817 01:52:06.902127 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.114493 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:07.145764 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:52:07.214608 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0817 01:52:07.318415 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.729707072s)
	I0817 01:52:07.318482 1554672 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210817015042-1554185"
	I0817 01:52:07.320343 1554672 out.go:177] * Verifying csi-hostpath-driver addon...
	I0817 01:52:07.322240 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0817 01:52:07.329026 1554672 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0817 01:52:07.329072 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:07.329707 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:07.406785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:07.407051 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.829299 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:07.833611 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:07.905862 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.906518 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.243240 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.097293811s)
	I0817 01:52:08.329812 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:08.338224 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:08.405779 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:08.407978 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.530852 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (1.316180136s)
	I0817 01:52:08.829006 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:08.834034 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:08.905993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.906433 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.328255 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:09.333785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:09.405657 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:09.405914 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.613886 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:09.829205 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:09.832931 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:09.905035 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.905962 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.328643 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:10.333042 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:10.404941 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:10.405901 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.829248 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:10.833275 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:10.905773 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.906291 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.328954 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:11.333012 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:11.409301 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.410066 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:11.614143 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:11.828872 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:11.833797 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:11.904929 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.905665 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.328367 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:12.333086 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:12.405384 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.405823 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:12.829376 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:12.833255 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:12.905024 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.905295 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.330689 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:13.338216 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:13.404972 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.406085 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:13.829177 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:13.832929 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:13.904662 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.905242 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.113342 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:14.328450 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:14.333455 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:14.404940 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.405321 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:14.827779 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:14.832993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:14.905252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.905259 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:15.328264 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:15.332934 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:15.404658 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:15.405224 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:15.828486 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:15.833605 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:15.904727 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:15.905383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:16.328197 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:16.332914 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:16.405192 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:16.405977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:16.613508 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:16.828234 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:16.833383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:16.904446 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:16.905357 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.327749 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:17.337646 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:17.404755 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:17.405248 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.827645 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:17.832968 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:17.904120 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.905322 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:18.328032 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:18.332346 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:18.405262 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:18.405850 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:18.828047 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:18.833667 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:18.906070 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:18.906612 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:19.112711 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:19.327808 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:19.332949 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:19.404756 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:19.404964 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:19.828001 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:19.833437 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:19.904449 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:19.904977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.327656 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:20.333295 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:20.404715 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.405667 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:20.828390 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:20.833214 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:20.905458 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.905671 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:21.328312 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:21.333138 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:21.405037 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:21.406170 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:21.612764 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:21.944477 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:21.946682 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:21.947605 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:21.947754 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:22.328433 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:22.333541 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:22.404285 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:22.405669 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:22.827511 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:22.833159 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:22.905254 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:22.905581 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:23.328750 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:23.333436 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:23.404313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:23.405077 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:23.613578 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:23.828253 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:23.832694 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:23.904993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:23.905761 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:24.328880 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:24.333313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:24.404520 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:24.404733 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:24.828322 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:24.833601 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:24.905217 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:24.905274 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.330911 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:25.337306 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:25.404857 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.405921 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:25.832639 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:25.835193 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:25.905020 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.905738 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:26.112693 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:26.327937 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:26.333091 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:26.405361 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:26.405698 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:26.828674 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:26.833006 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:26.905177 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:26.906093 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:27.337144 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:27.338231 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:27.406265 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:27.406265 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:27.828866 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:27.833010 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:27.904570 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:27.905457 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.112963 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:28.328408 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:28.333808 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:28.404888 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:28.405625 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.828928 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:28.833221 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:28.905969 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.906240 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:29.330142 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:29.334291 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:29.404551 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:29.405831 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:29.837438 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:29.838402 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:29.905810 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:29.905987 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.113285 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:30.328348 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:30.332925 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:30.405080 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:30.405351 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.828025 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:30.832792 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:30.905180 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.905627 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.328284 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:31.333115 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:31.405329 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:31.406706 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.828629 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:31.833620 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:31.908890 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.911347 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.113408 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:32.328824 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:32.333028 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:32.404805 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.405808 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:32.829223 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:32.833077 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:32.905936 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.906733 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:33.113600 1554672 node_ready.go:49] node "addons-20210817015042-1554185" has status "Ready":"True"
	I0817 01:52:33.113625 1554672 node_ready.go:38] duration metric: took 34.525011363s waiting for node "addons-20210817015042-1554185" to be "Ready" ...
	I0817 01:52:33.113634 1554672 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 01:52:33.122258 1554672 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:33.328105 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:33.333112 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:33.405131 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:33.406483 1554672 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 01:52:33.406499 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:33.828753 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:33.833308 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:33.905785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:33.906578 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:34.328900 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:34.333293 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:34.405074 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:34.405422 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:34.829036 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:34.844069 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:34.907082 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:34.907261 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.133323 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
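The Unschedulable condition above is why coredns stays Pending: with 0/1 nodes available, the only node carried the node.kubernetes.io/not-ready taint while it was coming up, and the stale condition lingers on the pod until the scheduler retries; PodScheduled flips to True at 01:52:43 (see the 01:52:44 entry below), shortly after the node reported "Ready" at 01:52:33. To inspect the same state by hand one could run something along these lines, reusing the context name from this log (a sketch, not part of the test run):

	kubectl --context addons-20210817015042-1554185 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
	kubectl --context addons-20210817015042-1554185 -n kube-system describe pod coredns-558bd4d5db-sxct6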
	I0817 01:52:35.329658 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:35.340946 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:35.406005 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.406344 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:35.828964 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:35.836081 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:35.905164 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.905926 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:36.328635 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:36.333208 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:36.406327 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:36.406693 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:36.828912 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:36.834669 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:36.906233 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:36.907548 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:37.328276 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:37.333065 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:37.443517 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:37.443853 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:37.633378 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:37.829201 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:37.833434 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:37.906518 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:37.906857 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.329240 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:38.333317 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:38.408662 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.409011 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:38.828315 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:38.837240 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:38.904802 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.906525 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.329255 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:39.346113 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:39.413436 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.413760 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:39.634418 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:39.828371 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:39.833885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:39.905904 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.906262 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:40.328884 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:40.333598 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:40.405309 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:40.407193 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:40.828938 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:40.833697 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:40.905855 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:40.906245 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:41.329054 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:41.334180 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:41.404918 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:41.406327 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:41.828350 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:41.833549 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:41.905158 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:41.905842 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:42.193681 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:42.328599 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:42.335515 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:42.405022 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:42.405819 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:42.828942 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:42.833740 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:42.905762 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:42.905954 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:43.328334 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:43.333885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:43.415938 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:43.416337 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:43.829129 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:43.839083 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:43.905165 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:43.905905 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.328646 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:44.333389 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:44.404851 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.406163 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:44.634366 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:52:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:44.828620 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:44.833682 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:44.905712 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.909482 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:45.328143 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:45.332944 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:45.406611 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:45.407338 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:45.828634 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:45.832978 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:45.904648 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:45.905363 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:46.328711 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:46.333910 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:46.405839 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:46.406763 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:46.635221 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:52:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:46.828340 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:46.833989 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:46.905252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:46.906215 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:47.328332 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:47.333832 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:47.405675 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:47.407973 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:47.827969 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:47.833906 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:47.907574 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:47.912357 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.328701 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:48.333127 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:48.406135 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:48.406524 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.636787 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:48.828308 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:48.833330 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:48.906467 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.906683 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:49.328422 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:49.333598 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:49.405055 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:49.405237 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:49.828698 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:49.833563 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:49.905439 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:49.905671 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:50.329000 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:50.335089 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:50.406085 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:50.407525 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:50.829299 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:50.833324 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:50.906506 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:50.906956 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.134950 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:51.327885 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:51.333409 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:51.405287 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:51.406140 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.829287 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:51.834079 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:51.905595 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.906917 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.136668 1554672 pod_ready.go:92] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.136691 1554672 pod_ready.go:81] duration metric: took 19.014386562s waiting for pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.136717 1554672 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.140525 1554672 pod_ready.go:92] pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.140545 1554672 pod_ready.go:81] duration metric: took 3.820392ms waiting for pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.140557 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.144374 1554672 pod_ready.go:92] pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.144391 1554672 pod_ready.go:81] duration metric: took 3.805ms waiting for pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.144400 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.147997 1554672 pod_ready.go:92] pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.148018 1554672 pod_ready.go:81] duration metric: took 3.596018ms waiting for pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.148027 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-88pjl" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.151630 1554672 pod_ready.go:92] pod "kube-proxy-88pjl" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.151645 1554672 pod_ready.go:81] duration metric: took 3.612895ms waiting for pod "kube-proxy-88pjl" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.151654 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.328964 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:52.333708 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:52.405187 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:52.406370 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.533532 1554672 pod_ready.go:92] pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.533558 1554672 pod_ready.go:81] duration metric: took 381.895022ms waiting for pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.533568 1554672 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.829155 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:52.839844 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:52.905885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.906272 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.346344 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:53.352796 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:53.409937 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:53.410482 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.834056 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:53.834773 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:53.907214 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.907172 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.331399 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:54.336335 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:54.407048 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.410847 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:54.829058 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:54.833883 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:54.905684 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:54.906829 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.944019 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:55.328849 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:55.333455 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:55.406435 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:55.408050 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:55.834184 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:55.836250 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:55.907784 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:55.908229 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.340402 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:56.341855 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:56.405913 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:56.406308 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.829718 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:56.840586 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:56.908288 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:56.908568 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.948818 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:57.328503 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:57.334462 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:57.406776 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:57.407190 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:57.828588 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:57.833847 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:57.905081 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:57.906429 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:58.329593 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:58.335086 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:58.405528 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:58.406555 1554672 kapi.go:108] duration metric: took 51.552974836s to wait for kubernetes.io/minikube-addons=registry ...
	I0817 01:52:58.829266 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:58.833517 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:58.905974 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:59.342609 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:59.348252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:59.405313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:59.444841 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:59.828685 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:59.833928 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:59.905309 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:00.328962 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:00.333845 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:00.405313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:00.829039 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:00.834166 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:00.904823 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.328747 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:01.334336 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:01.404643 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.829758 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:01.835420 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:01.905318 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.945948 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:53:02.376424 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:02.377873 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:02.404990 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:02.828812 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:02.833383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:02.904641 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:03.329032 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:03.337245 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:03.406764 1554672 kapi.go:108] duration metric: took 56.548723137s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0817 01:53:03.408669 1554672 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20210817015042-1554185 cluster.
	I0817 01:53:03.410521 1554672 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0817 01:53:03.412326 1554672 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
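As the messages above note, gcp-auth mounts credentials when a pod is created, so the gcp-auth-skip-secret label has to be part of the pod's configuration up front and existing pods must be recreated. A hedged example of creating a pod that opts out, with a placeholder pod name and image that are not taken from this run:

	kubectl --context addons-20210817015042-1554185 run opt-out-pod --image=busybox \
	  --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600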
	I0817 01:53:03.448173 1554672 pod_ready.go:92] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"True"
	I0817 01:53:03.448196 1554672 pod_ready.go:81] duration metric: took 10.914620384s waiting for pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace to be "Ready" ...
	I0817 01:53:03.448215 1554672 pod_ready.go:38] duration metric: took 30.334547327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
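The per-pod waits logged above (pod_ready.go) poll each system-critical pod for the Ready condition with a 6m0s budget per pod. Roughly the same check can be reproduced from a shell against the same context and label selectors, for example:

	kubectl --context addons-20210817015042-1554185 -n kube-system \
	  wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s
	kubectl --context addons-20210817015042-1554185 -n kube-system \
	  wait pod -l component=kube-apiserver --for=condition=Ready --timeout=6m0s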
	I0817 01:53:03.448235 1554672 api_server.go:50] waiting for apiserver process to appear ...
	I0817 01:53:03.448250 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:03.448304 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:03.564171 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:03.564232 1554672 cri.go:76] found id: ""
	I0817 01:53:03.564250 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:03.564343 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.575403 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:03.575484 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:03.604432 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:03.604494 1554672 cri.go:76] found id: ""
	I0817 01:53:03.604513 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:03.604561 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.607149 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:03.607215 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:03.632895 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:03.632908 1554672 cri.go:76] found id: ""
	I0817 01:53:03.632913 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:03.632967 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.635372 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:03.635435 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:03.664635 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:03.664650 1554672 cri.go:76] found id: ""
	I0817 01:53:03.664655 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:03.664689 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.667197 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:03.667270 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:03.691527 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:03.691545 1554672 cri.go:76] found id: ""
	I0817 01:53:03.691550 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:03.691582 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.693995 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:03.694060 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:03.717435 1554672 cri.go:76] found id: ""
	I0817 01:53:03.717475 1554672 logs.go:270] 0 containers: []
	W0817 01:53:03.717489 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:03.717495 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:03.717533 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:03.741717 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:03.741734 1554672 cri.go:76] found id: ""
	I0817 01:53:03.741739 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:03.741798 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.744804 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:03.744851 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:03.771775 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:03.771789 1554672 cri.go:76] found id: ""
	I0817 01:53:03.771794 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:03.771831 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.774470 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:03.774489 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:03.801776 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:03.801798 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:03.837058 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:03.840579 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:03.843933 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:03.843957 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:03.898510 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:03.898538 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:03.952593 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:03.952621 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:04.082990 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:04.083052 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:04.223853 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:04.223887 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:04.331965 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:04.338761 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:04.340534 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:04.342392 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:04.357212 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:04.357263 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:04.694598 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:04.694717 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:04.828761 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:04.828816 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:04.851348 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:04.852551 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:04.876644 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:04.876688 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:05.331362 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:05.343522 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:05.831960 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:05.841720 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:06.329544 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:06.334286 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:06.829369 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:06.833923 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.328774 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:07.334115 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.467368 1554672 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 01:53:07.487646 1554672 api_server.go:70] duration metric: took 1m9.278576044s to wait for apiserver process to appear ...
	I0817 01:53:07.487700 1554672 api_server.go:86] waiting for apiserver healthz status ...
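From here api_server.go moves on from checking that the kube-apiserver process exists to waiting on its healthz status. The same probe can be issued by hand through the kubeconfig context, for example (a sketch, not part of the test run):

	kubectl --context addons-20210817015042-1554185 get --raw /healthz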
	I0817 01:53:07.487733 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:07.487806 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:07.534592 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:07.534644 1554672 cri.go:76] found id: ""
	I0817 01:53:07.534661 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:07.534726 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.538672 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:07.538745 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:07.572611 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:07.572657 1554672 cri.go:76] found id: ""
	I0817 01:53:07.572674 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:07.572739 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.576722 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:07.576801 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:07.611541 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:07.611559 1554672 cri.go:76] found id: ""
	I0817 01:53:07.611564 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:07.611627 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.614311 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:07.614389 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:07.641823 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:07.641859 1554672 cri.go:76] found id: ""
	I0817 01:53:07.641864 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:07.641897 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.644712 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:07.644770 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:07.667773 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:07.667788 1554672 cri.go:76] found id: ""
	I0817 01:53:07.667793 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:07.667831 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.670409 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:07.670478 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:07.695746 1554672 cri.go:76] found id: ""
	I0817 01:53:07.695763 1554672 logs.go:270] 0 containers: []
	W0817 01:53:07.695768 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:07.695784 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:07.695828 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:07.727549 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:07.727592 1554672 cri.go:76] found id: ""
	I0817 01:53:07.727608 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:07.727672 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.731096 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:07.731168 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:07.758719 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:07.758734 1554672 cri.go:76] found id: ""
	I0817 01:53:07.758739 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:07.758787 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.761946 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:07.761964 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:07.830586 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:07.834021 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.863604 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:07.863626 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:07.887301 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:07.887356 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:07.918171 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:07.918195 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:08.012682 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:08.012712 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:08.059071 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:08.059126 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:08.163276 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:08.163302 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:08.176772 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:08.176790 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:08.330227 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:08.344515 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:08.425430 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:08.425453 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:08.486450 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:08.486475 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:08.515454 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:08.515475 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:08.542038 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:08.542057 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:08.828741 1554672 kapi.go:108] duration metric: took 1m7.524156223s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0817 01:53:08.834977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:09.335143 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:09.835186 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:10.335936 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:10.834892 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.068088 1554672 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 01:53:11.076771 1554672 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 01:53:11.077605 1554672 api_server.go:139] control plane version: v1.21.3
	I0817 01:53:11.077645 1554672 api_server.go:129] duration metric: took 3.589928004s to wait for apiserver health ...
	I0817 01:53:11.077667 1554672 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 01:53:11.077694 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:11.077770 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:11.134012 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:11.134030 1554672 cri.go:76] found id: ""
	I0817 01:53:11.134035 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:11.134081 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.136813 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:11.136882 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:11.158746 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:11.158763 1554672 cri.go:76] found id: ""
	I0817 01:53:11.158768 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:11.158868 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.161890 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:11.161955 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:11.185618 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:11.185638 1554672 cri.go:76] found id: ""
	I0817 01:53:11.185643 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:11.185698 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.188273 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:11.188341 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:11.212061 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:11.212084 1554672 cri.go:76] found id: ""
	I0817 01:53:11.212104 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:11.212154 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.214710 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:11.214777 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:11.254063 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:11.254080 1554672 cri.go:76] found id: ""
	I0817 01:53:11.254086 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:11.254150 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.257322 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:11.257386 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:11.280677 1554672 cri.go:76] found id: ""
	I0817 01:53:11.280719 1554672 logs.go:270] 0 containers: []
	W0817 01:53:11.280735 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:11.280749 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:11.280792 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:11.302301 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:11.302344 1554672 cri.go:76] found id: ""
	I0817 01:53:11.302359 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:11.302405 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.305069 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:11.305128 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:11.334791 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.337025 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:11.337041 1554672 cri.go:76] found id: ""
	I0817 01:53:11.337046 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:11.337097 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.340390 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:11.340407 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:11.377298 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:11.377344 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:11.408451 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:11.408473 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:11.514559 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:11.514589 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:11.567396 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:11.567423 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:11.625821 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:11.625847 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:11.652282 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:11.652306 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:11.675002 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:11.675047 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:11.697704 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:11.697724 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:11.745590 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:11.745611 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:11.836311 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.837956 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:11.837993 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:11.865409 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:11.865430 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:12.335417 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:12.834938 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:13.335078 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:13.835219 1554672 kapi.go:108] duration metric: took 1m6.512977174s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0817 01:53:13.838858 1554672 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, volumesnapshots, olm, registry, gcp-auth, ingress, csi-hostpath-driver
	I0817 01:53:13.838918 1554672 addons.go:344] enableAddons completed in 1m15.630038865s
	I0817 01:53:14.513128 1554672 system_pods.go:59] 18 kube-system pods found
	I0817 01:53:14.513163 1554672 system_pods.go:61] "coredns-558bd4d5db-sxct6" [954e41e3-b5a4-4fa8-8926-fa9f53507414] Running
	I0817 01:53:14.513169 1554672 system_pods.go:61] "csi-hostpath-attacher-0" [ae96aa4a-3965-45a9-8e94-db9fee5bcae1] Running
	I0817 01:53:14.513174 1554672 system_pods.go:61] "csi-hostpath-provisioner-0" [e6cedc35-32e7-4be9-907a-063eaa25f07d] Running
	I0817 01:53:14.513178 1554672 system_pods.go:61] "csi-hostpath-resizer-0" [9c8b4283-1ee1-45a6-ace0-e0096867592a] Running
	I0817 01:53:14.513183 1554672 system_pods.go:61] "csi-hostpath-snapshotter-0" [d23b3d96-e125-4669-a694-40c25a9ca2bc] Running
	I0817 01:53:14.513189 1554672 system_pods.go:61] "csi-hostpathplugin-0" [50373dc3-be79-4049-b2f4-e19bb0a79c10] Running
	I0817 01:53:14.513193 1554672 system_pods.go:61] "etcd-addons-20210817015042-1554185" [b0a759e2-33c6-486b-aee8-e1019669fb12] Running
	I0817 01:53:14.513200 1554672 system_pods.go:61] "kindnet-xp2kn" [234e19f8-3cdd-4c44-9dff-290f932bba79] Running
	I0817 01:53:14.513205 1554672 system_pods.go:61] "kube-apiserver-addons-20210817015042-1554185" [3704ee0c-53da-4106-b407-9c6829a74921] Running
	I0817 01:53:14.513215 1554672 system_pods.go:61] "kube-controller-manager-addons-20210817015042-1554185" [a7e9992d-6083-4141-a778-7ab31067cb40] Running
	I0817 01:53:14.513220 1554672 system_pods.go:61] "kube-proxy-88pjl" [3152779f-8eaa-4982-8a07-a39f7c215086] Running
	I0817 01:53:14.513225 1554672 system_pods.go:61] "kube-scheduler-addons-20210817015042-1554185" [3e4bab6f-a0a1-46e8-83dd-f7b11f4e9d62] Running
	I0817 01:53:14.513229 1554672 system_pods.go:61] "metrics-server-77c99ccb96-x8mh4" [634199c2-0567-4fb7-b5d6-2205aa4fcfd4] Running
	I0817 01:53:14.513238 1554672 system_pods.go:61] "registry-9np4b" [1b228b1c-c9df-4c36-a0f4-2a2fb6ec967b] Running
	I0817 01:53:14.513247 1554672 system_pods.go:61] "registry-proxy-p5xh8" [0a7638e5-9e17-4626-aeb9-b7fe2abe695d] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 01:53:14.513257 1554672 system_pods.go:61] "snapshot-controller-989f9ddc8-rcswn" [0d6290bd-2e2d-4e14-b299-f4b14ea2de3b] Running
	I0817 01:53:14.513264 1554672 system_pods.go:61] "snapshot-controller-989f9ddc8-zqgfr" [7ceff31f-ed10-44dd-8c0f-7063e87beadc] Running
	I0817 01:53:14.513274 1554672 system_pods.go:61] "storage-provisioner" [3f4cb2a6-c88b-486f-bba7-cef64ca39e9a] Running
	I0817 01:53:14.513279 1554672 system_pods.go:74] duration metric: took 3.43559739s to wait for pod list to return data ...
	I0817 01:53:14.513290 1554672 default_sa.go:34] waiting for default service account to be created ...
	I0817 01:53:14.515707 1554672 default_sa.go:45] found service account: "default"
	I0817 01:53:14.515727 1554672 default_sa.go:55] duration metric: took 2.432583ms for default service account to be created ...
	I0817 01:53:14.515734 1554672 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 01:53:14.523274 1554672 system_pods.go:86] 18 kube-system pods found
	I0817 01:53:14.523301 1554672 system_pods.go:89] "coredns-558bd4d5db-sxct6" [954e41e3-b5a4-4fa8-8926-fa9f53507414] Running
	I0817 01:53:14.523309 1554672 system_pods.go:89] "csi-hostpath-attacher-0" [ae96aa4a-3965-45a9-8e94-db9fee5bcae1] Running
	I0817 01:53:14.523314 1554672 system_pods.go:89] "csi-hostpath-provisioner-0" [e6cedc35-32e7-4be9-907a-063eaa25f07d] Running
	I0817 01:53:14.523324 1554672 system_pods.go:89] "csi-hostpath-resizer-0" [9c8b4283-1ee1-45a6-ace0-e0096867592a] Running
	I0817 01:53:14.523332 1554672 system_pods.go:89] "csi-hostpath-snapshotter-0" [d23b3d96-e125-4669-a694-40c25a9ca2bc] Running
	I0817 01:53:14.523338 1554672 system_pods.go:89] "csi-hostpathplugin-0" [50373dc3-be79-4049-b2f4-e19bb0a79c10] Running
	I0817 01:53:14.523346 1554672 system_pods.go:89] "etcd-addons-20210817015042-1554185" [b0a759e2-33c6-486b-aee8-e1019669fb12] Running
	I0817 01:53:14.523351 1554672 system_pods.go:89] "kindnet-xp2kn" [234e19f8-3cdd-4c44-9dff-290f932bba79] Running
	I0817 01:53:14.523364 1554672 system_pods.go:89] "kube-apiserver-addons-20210817015042-1554185" [3704ee0c-53da-4106-b407-9c6829a74921] Running
	I0817 01:53:14.523369 1554672 system_pods.go:89] "kube-controller-manager-addons-20210817015042-1554185" [a7e9992d-6083-4141-a778-7ab31067cb40] Running
	I0817 01:53:14.523377 1554672 system_pods.go:89] "kube-proxy-88pjl" [3152779f-8eaa-4982-8a07-a39f7c215086] Running
	I0817 01:53:14.523382 1554672 system_pods.go:89] "kube-scheduler-addons-20210817015042-1554185" [3e4bab6f-a0a1-46e8-83dd-f7b11f4e9d62] Running
	I0817 01:53:14.523391 1554672 system_pods.go:89] "metrics-server-77c99ccb96-x8mh4" [634199c2-0567-4fb7-b5d6-2205aa4fcfd4] Running
	I0817 01:53:14.523396 1554672 system_pods.go:89] "registry-9np4b" [1b228b1c-c9df-4c36-a0f4-2a2fb6ec967b] Running
	I0817 01:53:14.523405 1554672 system_pods.go:89] "registry-proxy-p5xh8" [0a7638e5-9e17-4626-aeb9-b7fe2abe695d] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 01:53:14.523414 1554672 system_pods.go:89] "snapshot-controller-989f9ddc8-rcswn" [0d6290bd-2e2d-4e14-b299-f4b14ea2de3b] Running
	I0817 01:53:14.523429 1554672 system_pods.go:89] "snapshot-controller-989f9ddc8-zqgfr" [7ceff31f-ed10-44dd-8c0f-7063e87beadc] Running
	I0817 01:53:14.523434 1554672 system_pods.go:89] "storage-provisioner" [3f4cb2a6-c88b-486f-bba7-cef64ca39e9a] Running
	I0817 01:53:14.523439 1554672 system_pods.go:126] duration metric: took 7.700756ms to wait for k8s-apps to be running ...
	I0817 01:53:14.523449 1554672 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 01:53:14.523496 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 01:53:14.532286 1554672 system_svc.go:56] duration metric: took 8.834069ms WaitForService to wait for kubelet.
	I0817 01:53:14.532341 1554672 kubeadm.go:547] duration metric: took 1m16.323273553s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 01:53:14.532368 1554672 node_conditions.go:102] verifying NodePressure condition ...
	I0817 01:53:14.535572 1554672 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 01:53:14.535600 1554672 node_conditions.go:123] node cpu capacity is 2
	I0817 01:53:14.535613 1554672 node_conditions.go:105] duration metric: took 3.24014ms to run NodePressure ...
	I0817 01:53:14.535627 1554672 start.go:231] waiting for startup goroutines ...
	I0817 01:53:14.849964 1554672 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 01:53:14.851936 1554672 out.go:177] * Done! kubectl is now configured to use "addons-20210817015042-1554185" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID
	5eebb444a6ef1       04bd8b4e0d303       About a minute ago   Running             task-pv-container                        0                   4ff6bc2905fc1
	6f369eecd6011       d544402579747       About a minute ago   Exited              catalog-operator                         7                   d7bf81a0ac291
	2de95bad668aa       d544402579747       2 minutes ago        Exited              olm-operator                             7                   3745d022afa82
	fcf13ae398afa       1611cd07b61d5       8 minutes ago        Running             busybox                                  0                   f1d131a615a5f
	79b64e9292026       ab63026e5f864       12 minutes ago       Running             liveness-probe                           0                   e1b67cc269ffc
	86d072c7d6f7f       f8f69c8b53974       12 minutes ago       Running             hostpath                                 0                   e1b67cc269ffc
	95f12ea0ee9f4       1f46a863d2aa9       12 minutes ago       Running             node-driver-registrar                    0                   e1b67cc269ffc
	9803a3ca0028f       bac9ddccb0c70       12 minutes ago       Running             controller                               0                   8a5c5f5789f9b
	d76b5b43f143e       b4df90000e547       12 minutes ago       Running             csi-external-health-monitor-controller   0                   e1b67cc269ffc
	33a3dc8565cfc       69724f415cab8       12 minutes ago       Running             csi-attacher                             0                   0ae516c846f1f
	7cf8bb6cdcfe2       a883f7fc35610       12 minutes ago       Exited              patch                                    0                   8703f481d86b7
	b2165d1abb5e5       a883f7fc35610       12 minutes ago       Exited              create                                   0                   95b081ab37530
	c3eb735c4bd3e       d65cad97e5f05       12 minutes ago       Running             csi-snapshotter                          0                   558825437a764
	3af33a1255a45       03c15ec36e257       12 minutes ago       Running             csi-provisioner                          0                   cb89551723f57
	2b02be61418e6       63f120615f44b       12 minutes ago       Running             csi-external-health-monitor-agent        0                   e1b67cc269ffc
	1b06e793319cd       3758cfc26c6db       12 minutes ago       Running             volume-snapshot-controller               0                   c81fe44186720
	ecf9efd7a3f01       803606888e0b1       12 minutes ago       Running             csi-resizer                              0                   f99c5fda234ab
	783c0958684bd       ba04bb24b9575       12 minutes ago       Running             storage-provisioner                      0                   0cde084873a62
	13be13e3410ac       1a1f05a2cd7c2       12 minutes ago       Running             coredns                                  0                   00cb17ddd7f4a
	6fe738b9a8dba       3758cfc26c6db       12 minutes ago       Running             volume-snapshot-controller               0                   d0b05273cbb65
	7b33a9bf5802e       f37b7c809e5dc       13 minutes ago       Running             kindnet-cni                              0                   96dbe7c3048af
	0483eb703ed0f       4ea38350a1beb       13 minutes ago       Running             kube-proxy                               0                   f0918af3dc71f
	eacccd844ca10       44a6d50ef170d       13 minutes ago       Running             kube-apiserver                           0                   a18344960e958
	615d16acf0dc7       31a3b96cefc1e       13 minutes ago       Running             kube-scheduler                           0                   99c49ff38f4e8
	29af4eb3039bc       05b738aa1bc63       13 minutes ago       Running             etcd                                     0                   c6d8e2c4d15ca
	52a4c60d098e5       cb310ff289d79       13 minutes ago       Running             kube-controller-manager                  0                   437a86afaf37b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 01:50:50 UTC, end at Tue 2021-08-17 02:05:26 UTC. --
	Aug 17 02:03:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:57.795030716Z" level=error msg="copy shim log" error="read /proc/self/fd/154: file already closed"
	Aug 17 02:03:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:57.879585966Z" level=info msg="TearDown network for sandbox \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" successfully"
	Aug 17 02:03:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:57.879614167Z" level=info msg="StopPodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" returns successfully"
	Aug 17 02:03:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:57.890930033Z" level=info msg="RemoveContainer for \"eb2360810df1ac87246f467960ae5f4f48f88e2d9520e5916cc5533a65753351\""
	Aug 17 02:03:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:57.902591627Z" level=info msg="RemoveContainer for \"eb2360810df1ac87246f467960ae5f4f48f88e2d9520e5916cc5533a65753351\" returns successfully"
	Aug 17 02:03:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:57.911020113Z" level=error msg="ContainerStatus for \"eb2360810df1ac87246f467960ae5f4f48f88e2d9520e5916cc5533a65753351\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb2360810df1ac87246f467960ae5f4f48f88e2d9520e5916cc5533a65753351\": not found"
	Aug 17 02:03:58 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:58.878969692Z" level=info msg="StopPodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\""
	Aug 17 02:03:58 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:58.911902385Z" level=info msg="TearDown network for sandbox \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" successfully"
	Aug 17 02:03:58 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:58.911930372Z" level=info msg="StopPodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" returns successfully"
	Aug 17 02:03:59 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:59.073632216Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:nginx,Uid:ed08e79b-1781-4708-8f29-d5b69cc3c7c6,Namespace:default,Attempt:0,}"
	Aug 17 02:03:59 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:59.150236875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be360c108dbda8ca7c6b59ecd7cb6cffb476ba94256bf81ebbedc94430717c00 pid=14165
	Aug 17 02:03:59 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:59.217627315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx,Uid:ed08e79b-1781-4708-8f29-d5b69cc3c7c6,Namespace:default,Attempt:0,} returns sandbox id \"be360c108dbda8ca7c6b59ecd7cb6cffb476ba94256bf81ebbedc94430717c00\""
	Aug 17 02:03:59 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:59.218965862Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:04:00 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:00.115379884Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:04:13 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:13.897083154Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:04:14 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:14.822268171Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:04:41 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:41.897006103Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:04:42 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:42.858879672Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:04:53 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:53.047714555Z" level=info msg="StopPodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\""
	Aug 17 02:04:53 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:53.058229986Z" level=info msg="TearDown network for sandbox \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" successfully"
	Aug 17 02:04:53 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:53.058262970Z" level=info msg="StopPodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" returns successfully"
	Aug 17 02:04:53 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:53.058615656Z" level=info msg="RemovePodSandbox for \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\""
	Aug 17 02:04:53 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:04:53.064378708Z" level=info msg="RemovePodSandbox \"42d312091cf20b1838a65f6978df473c492b0f75025bb83f76a8e88957979d9c\" returns successfully"
	Aug 17 02:05:24 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:05:24.897357489Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:05:25 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:05:25.800770283Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
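	The repeated "429 Too Many Requests" responses above are Docker Hub's anonymous pull rate limit: the nginx:alpine pull never succeeds during the window shown, so the pull keeps being retried. A minimal workaround sketch, assuming the test host has Docker installed and credentials for Docker Hub (these commands are illustrative only and were not part of this run):
	
	  # Authenticate on the host so image pulls are not counted against the anonymous rate limit
	  docker login
	  # Pull nginx:alpine on the host and load it into the cluster, so containerd inside the
	  # node does not need to reach registry-1.docker.io itself
	  minikube cache add nginx:alpine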
	
	* 
	* ==> coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210817015042-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210817015042-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=addons-20210817015042-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T01_51_45_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210817015042-1554185
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210817015042-1554185"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 01:51:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210817015042-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 02:05:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 02:03:57 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 02:03:57 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 02:03:57 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 02:03:57 +0000   Tue, 17 Aug 2021 01:52:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210817015042-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                ace180e0-70a7-4178-bffd-233be0529698
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  default                     nginx                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  default                     task-pv-pod                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	  ingress-nginx               ingress-nginx-controller-59b45fb494-d8wsj                100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         13m
	  kube-system                 coredns-558bd4d5db-sxct6                                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 csi-hostpath-attacher-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-provisioner-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-snapshotter-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-0                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-20210817015042-1554185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-xp2kn                                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-addons-20210817015042-1554185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-20210817015042-1554185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-88pjl                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-20210817015042-1554185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-989f9ddc8-rcswn                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-989f9ddc8-zqgfr                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  olm                         catalog-operator-75d496484d-86xl7                        10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         13m
	  olm                         olm-operator-859c88c96-j28dd                             10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                970m (48%)  100m (5%)
	  memory             550Mi (7%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 13m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x4 over 13m)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x3 over 13m)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x3 over 13m)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet     Node addons-20210817015042-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                12m                kubelet     Node addons-20210817015042-1554185 status is now: NodeReady
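	As a cross-check on the table above: the node reports 2 CPUs and 8033456Ki of memory, so 970m of CPU requests is 970/2000, about 48%, and 550Mi (563200Ki) of memory requests is 563200/8033456, about 7%, matching the Allocated resources figures. A sketch of how the same node data could be re-queried from the test host, assuming the kubeconfig context created by this run is still present (illustrative only, not part of the test):
	
	  kubectl --context addons-20210817015042-1554185 describe node addons-20210817015042-1554185
	  # Only the allocatable cpu and memory fields
	  kubectl --context addons-20210817015042-1554185 get node addons-20210817015042-1554185 \
	    -o jsonpath='{.status.allocatable.cpu}{"\n"}{.status.allocatable.memory}{"\n"}'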
	
	* 
	* ==> dmesg <==
	* [Aug17 01:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] <==
	* 2021-08-17 02:01:36.802974 I | mvcc: store.index: compact 1588
	2021-08-17 02:01:36.826662 I | mvcc: finished scheduled compaction at 1588 (took 23.144433ms)
	2021-08-17 02:01:42.862033 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:01:52.861316 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:02.861392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:12.861284 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:22.861252 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:32.861856 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:42.862067 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:52.861377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:02.861122 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:12.861370 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:22.861110 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:32.861132 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:42.861971 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:52.861476 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:04:02.861775 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:04:12.861414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:04:22.861743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:04:32.861366 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:04:42.861685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:04:52.861156 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:05:02.861363 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:05:12.861600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:05:22.861982 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:05:26 up  9:47,  0 users,  load average: 0.46, 0.49, 1.08
	Linux addons-20210817015042-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] <==
	* I0817 02:00:36.142995       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:00:36.143009       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:01:17.263282       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:01:17.263422       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:01:17.263440       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:01:59.911534       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:01:59.911574       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:01:59.911582       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:02:32.231871       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:02:32.231909       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:02:32.231918       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:03:04.787394       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:03:04.787437       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:03:04.787446       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:03:39.645013       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:03:39.645160       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:03:39.645180       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:03:58.370446       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	I0817 02:04:06.032090       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0817 02:04:22.523416       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:04:22.523455       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:04:22.523463       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:04:56.908929       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:04:56.908972       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:04:56.909061       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] <==
	* I0817 01:52:27.209952       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
	I0817 01:52:27.209987       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
	I0817 01:52:27.210060       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
	I0817 01:52:27.211453       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0817 01:52:27.412457       1 shared_informer.go:247] Caches are synced for resource quota 
	W0817 01:52:27.565215       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 01:52:27.570191       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0817 01:52:27.585755       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0817 01:52:27.587067       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0817 01:52:27.788117       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 01:52:33.056834       1 event.go:291] "Event occurred" object="kube-system/registry-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-proxy-p5xh8"
	E0817 01:52:33.075112       1 daemon_controller.go:320] kube-system/registry-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"registry-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"bbc76700-77ff-4df0-928a-e381ef3cf185", ResourceVersion:"486", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764761920, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "kubernetes.io/minikube-addons":"registry"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"kubernetes.io/minikube-addons\":\"registry\"},\"name\":\"regist
ry-proxy\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"kubernetes.io/minikube-addons\":\"registry\",\"registry-proxy\":\"true\"}},\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"kubernetes.io/minikube-addons\":\"registry\",\"registry-proxy\":\"true\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"REGISTRY_HOST\",\"value\":\"registry.kube-system.svc.cluster.local\"},{\"name\":\"REGISTRY_PORT\",\"value\":\"80\"}],\"image\":\"gcr.io/google_containers/kube-registry-proxy:0.4@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"registry-proxy\",\"ports\":[{\"containerPort\":80,\"hostPort\":5000,\"name\":\"registry\"}]}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d7e
000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d7e018)}, v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d7e030), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d7e048)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b3d3e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "kubernetes.io/minikube-addons":"registry", "registry-proxy":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(n
il), Containers:[]v1.Container{v1.Container{Name:"registry-proxy", Image:"gcr.io/google_containers/kube-registry-proxy:0.4@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"registry", HostPort:5000, ContainerPort:80, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"REGISTRY_HOST", Value:"registry.kube-system.svc.cluster.local", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"REGISTRY_PORT", Value:"80", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPre
sent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001d2d158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004f7d50), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:
v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001d63790)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001d2d16c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "registry-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0817 01:52:36.883695       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0817 01:52:56.050693       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0817 01:52:56.851746       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0817 01:52:57.251384       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	E0817 01:52:57.435870       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0817 01:52:57.652302       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	W0817 01:52:57.808910       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 01:57:26.704983       1 tokens_controller.go:262] error synchronizing serviceaccount gcp-auth/default: secrets "default-token-zg7wn" is forbidden: unable to create new content in namespace gcp-auth because it is being terminated
	I0817 01:57:48.400672       1 event.go:291] "Event occurred" object="default/hpvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0817 01:57:48.797797       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-edf3d92e-1108-4adc-a8cd-37519395465d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^85c325d7-fefe-11eb-bd30-26acc1e90309") from node "addons-20210817015042-1554185" 
	I0817 01:57:49.345399       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-edf3d92e-1108-4adc-a8cd-37519395465d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^85c325d7-fefe-11eb-bd30-26acc1e90309") from node "addons-20210817015042-1554185" 
	I0817 01:57:49.345604       1 event.go:291] "Event occurred" object="default/task-pv-pod" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-edf3d92e-1108-4adc-a8cd-37519395465d\" "
	I0817 01:57:53.078416       1 namespace_controller.go:185] Namespace has been deleted gcp-auth
	
	* 
	* ==> kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] <==
	* I0817 01:51:59.199305       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 01:51:59.199348       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 01:51:59.199381       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 01:51:59.228513       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 01:51:59.228548       1 server_others.go:212] Using iptables Proxier.
	I0817 01:51:59.228558       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 01:51:59.228568       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 01:51:59.229489       1 server.go:643] Version: v1.21.3
	I0817 01:51:59.234867       1 config.go:315] Starting service config controller
	I0817 01:51:59.234890       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 01:51:59.236683       1 config.go:224] Starting endpoint slice config controller
	I0817 01:51:59.236698       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 01:51:59.242351       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 01:51:59.243149       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 01:51:59.338912       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 01:51:59.338971       1 shared_informer.go:247] Caches are synced for service config 
	W0817 01:58:09.244582       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:05:11.245597       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] <==
	* W0817 01:51:41.468231       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 01:51:41.468338       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 01:51:41.468430       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 01:51:41.611648       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 01:51:41.615019       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 01:51:41.616612       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 01:51:41.616756       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 01:51:41.622145       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 01:51:41.624737       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 01:51:41.627800       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 01:51:41.628373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 01:51:41.628434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 01:51:41.628492       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 01:51:41.628547       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 01:51:41.628600       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628650       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628754       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 01:51:41.628805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 01:51:41.630964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 01:51:41.631026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:42.555258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 01:51:42.563233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 01:51:42.595603       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 01:51:44.616129       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 01:50:50 UTC, end at Tue 2021-08-17 02:05:26 UTC. --
	Aug 17 02:04:45 addons-20210817015042-1554185 kubelet[1147]: I0817 02:04:45.896297    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:04:45 addons-20210817015042-1554185 kubelet[1147]: E0817 02:04:45.896737    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:04:54 addons-20210817015042-1554185 kubelet[1147]: I0817 02:04:54.895694    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:04:54 addons-20210817015042-1554185 kubelet[1147]: E0817 02:04:54.896446    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 02:04:56 addons-20210817015042-1554185 kubelet[1147]: I0817 02:04:56.896395    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:04:56 addons-20210817015042-1554185 kubelet[1147]: E0817 02:04:56.897168    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:04:57 addons-20210817015042-1554185 kubelet[1147]: E0817 02:04:57.896857    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx" podUID=ed08e79b-1781-4708-8f29-d5b69cc3c7c6
	Aug 17 02:05:05 addons-20210817015042-1554185 kubelet[1147]: I0817 02:05:05.896110    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:05:05 addons-20210817015042-1554185 kubelet[1147]: E0817 02:05:05.896524    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 02:05:08 addons-20210817015042-1554185 kubelet[1147]: I0817 02:05:08.896336    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:05:08 addons-20210817015042-1554185 kubelet[1147]: E0817 02:05:08.896738    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:05:12 addons-20210817015042-1554185 kubelet[1147]: E0817 02:05:12.897727    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx" podUID=ed08e79b-1781-4708-8f29-d5b69cc3c7c6
	Aug 17 02:05:17 addons-20210817015042-1554185 kubelet[1147]: I0817 02:05:17.202344    1147 clientconn.go:106] parsed scheme: ""
	Aug 17 02:05:17 addons-20210817015042-1554185 kubelet[1147]: I0817 02:05:17.202370    1147 clientconn.go:106] scheme "" not registered, fallback to default scheme
	Aug 17 02:05:17 addons-20210817015042-1554185 kubelet[1147]: I0817 02:05:17.202414    1147 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi-hostpath/csi.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 17 02:05:17 addons-20210817015042-1554185 kubelet[1147]: I0817 02:05:17.202425    1147 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 17 02:05:17 addons-20210817015042-1554185 kubelet[1147]: I0817 02:05:17.202460    1147 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	Aug 17 02:05:19 addons-20210817015042-1554185 kubelet[1147]: I0817 02:05:19.895797    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:05:19 addons-20210817015042-1554185 kubelet[1147]: E0817 02:05:19.896176    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 02:05:23 addons-20210817015042-1554185 kubelet[1147]: I0817 02:05:23.895714    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:05:23 addons-20210817015042-1554185 kubelet[1147]: E0817 02:05:23.896107    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:05:25 addons-20210817015042-1554185 kubelet[1147]: E0817 02:05:25.800982    1147 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:alpine"
	Aug 17 02:05:25 addons-20210817015042-1554185 kubelet[1147]: E0817 02:05:25.801035    1147 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:alpine"
	Aug 17 02:05:25 addons-20210817015042-1554185 kubelet[1147]: E0817 02:05:25.801111    1147 kuberuntime_manager.go:864] container &Container{Name:nginx,Image:nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnxzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod nginx_default(ed08e79b-1781-4708-8f29-d5b69cc3c7c6): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "d
ocker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 17 02:05:25 addons-20210817015042-1554185 kubelet[1147]: E0817 02:05:25.801161    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID=ed08e79b-1781-4708-8f29-d5b69cc3c7c6
	
	* 
	* ==> storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] <==
	* I0817 01:52:45.168349       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 01:52:45.223745       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 01:52:45.226921       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 01:52:45.243264       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 01:52:45.243748       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41860dbd-59f4-40f3-b06c-d38f89989bf1", APIVersion:"v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01 became leader
	I0817 01:52:45.243789       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01!
	I0817 01:52:45.346906       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01!
	

                                                
                                                
-- /stdout --
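The only error of note in the kube-controller-manager log above is the registry-proxy daemonset status write failing with "the object has been modified; please apply your changes to the latest version and try again". That is Kubernetes' optimistic-concurrency check: the write was made against a stale resourceVersion and rejected, and the usual remedy is to re-read the object and retry. Below is a minimal client-go sketch of that pattern, not anything from the test suite; it assumes a reachable kubeconfig, and the label mutation is a hypothetical stand-in for whatever change is actually being written.

// Hedged sketch only: retry an update on resourceVersion conflicts using
// client-go's RetryOnConflict, re-reading the object on every attempt.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Fetch the latest copy so the update carries the current resourceVersion.
		ds, getErr := client.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "registry-proxy", metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if ds.Labels == nil {
			ds.Labels = map[string]string{}
		}
		ds.Labels["example-label"] = "example" // hypothetical mutation
		_, updateErr := client.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
		return updateErr
	})
	if err != nil {
		fmt.Println("update failed after retries:", err)
	}
}

Controllers generally requeue and retry on conflict, so a single occurrence of this message is not by itself a failure.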
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210817015042-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: nginx ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Olm]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210817015042-1554185 describe pod nginx ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210817015042-1554185 describe pod nginx ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j: exit status 1 (87.989587ms)

                                                
                                                
-- stdout --
	Name:         nginx
	Namespace:    default
	Priority:     0
	Node:         addons-20210817015042-1554185/192.168.49.2
	Start Time:   Tue, 17 Aug 2021 02:03:58 +0000
	Labels:       run=nginx
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nnxzp (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-nnxzp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  89s                default-scheduler  Successfully assigned default/nginx to addons-20210817015042-1554185
	  Normal   BackOff    15s (x4 over 87s)  kubelet            Back-off pulling image "nginx:alpine"
	  Warning  Failed     15s (x4 over 87s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    3s (x4 over 88s)   kubelet            Pulling image "nginx:alpine"
	  Warning  Failed     2s (x4 over 87s)   kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2s (x4 over 87s)   kubelet            Error: ErrImagePull

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-msw6w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xpb6j" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210817015042-1554185 describe pod nginx ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j: exit status 1
--- FAIL: TestAddons/parallel/Olm (732.33s)
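The pod events above end with the nginx:alpine pull being refused with HTTP 429 (toomanyrequests) from registry-1.docker.io, and the same anonymous pull limit recurs in the CSI test below. The following is a small sketch of how one could check the runner's remaining anonymous pull allowance; the endpoint, repository, and header names follow Docker's published rate-limit preview procedure and are not taken from this report, so treat them as external details that may change.

// Hedged sketch: query Docker Hub's rate-limit preview endpoint and print the
// remaining anonymous pulls for this IP, per Docker's documented procedure.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Fetch an anonymous pull token scoped to the ratelimitpreview/test repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest and read the rate-limit headers from the response.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	headResp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer headResp.Body.Close()

	fmt.Println("ratelimit-limit:    ", headResp.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", headResp.Header.Get("ratelimit-remaining"))
}

Authenticating the pulls, pointing containerd at a registry mirror, or pre-loading the image into the node are the usual ways to stay under the limit.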

                                                
                                    
TestAddons/parallel/CSI (363.91s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 9.248465ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210817015042-1554185 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210817015042-1554185 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210817015042-1554185 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [c51fcafb-3087-4d61-8189-5d6ec7ef33ac] Pending
helpers_test.go:343: "task-pv-pod" [c51fcafb-3087-4d61-8189-5d6ec7ef33ac] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: timed out waiting for the condition ****
addons_test.go:544: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
addons_test.go:544: TestAddons/parallel/CSI: showing logs for failed pods as of 2021-08-17 02:03:49.081842305 +0000 UTC m=+859.876590090
addons_test.go:544: (dbg) Run:  kubectl --context addons-20210817015042-1554185 describe po task-pv-pod -n default
addons_test.go:544: (dbg) kubectl --context addons-20210817015042-1554185 describe po task-pv-pod -n default:
Name:         task-pv-pod
Namespace:    default
Priority:     0
Node:         addons-20210817015042-1554185/192.168.49.2
Start Time:   Tue, 17 Aug 2021 01:57:48 +0000
Labels:       app=task-pv-pod
Annotations:  <none>
Status:       Pending
IP:           10.244.0.23
IPs:
IP:  10.244.0.23
Containers:
task-pv-container:
Container ID:   
Image:          nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc6x7 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-jc6x7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason                   Age                    From                                      Message
----     ------                   ----                   ----                                      -------
Normal   Scheduled                6m1s                   default-scheduler                         Successfully assigned default/task-pv-pod to addons-20210817015042-1554185
Warning  VolumeConditionAbnormal  6m1s (x10 over 6m1s)   csi-pv-monitor-agent-hostpath.csi.k8s.io  The volume isn't mounted
Normal   SuccessfulAttachVolume   6m                     attachdetach-controller                   AttachVolume.Attach succeeded for volume "pvc-edf3d92e-1108-4adc-a8cd-37519395465d"
Warning  Failed                   5m15s                  kubelet                                   Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:7ef3ca6ca846a10787f98fd2722d6e4054a17b37981a3ca273207a792731aebe: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling                  4m25s (x4 over 5m53s)  kubelet                                   Pulling image "nginx"
Warning  Failed                   4m24s (x3 over 5m52s)  kubelet                                   Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed                   4m24s (x4 over 5m52s)  kubelet                                   Error: ErrImagePull
Warning  Failed                   4m10s (x6 over 5m51s)  kubelet                                   Error: ImagePullBackOff
Normal   VolumeConditionNormal    61s (x41 over 5m1s)    csi-pv-monitor-agent-hostpath.csi.k8s.io  The Volume returns to the healthy state
Normal   BackOff                  53s (x20 over 5m51s)   kubelet                                   Back-off pulling image "nginx"
addons_test.go:544: (dbg) Run:  kubectl --context addons-20210817015042-1554185 logs task-pv-pod -n default
addons_test.go:544: (dbg) Non-zero exit: kubectl --context addons-20210817015042-1554185 logs task-pv-pod -n default: exit status 1 (98.29574ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:544: kubectl --context addons-20210817015042-1554185 logs task-pv-pod -n default: exit status 1
addons_test.go:545: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: timed out waiting for the condition
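Per the events above, the volume side of this test behaved: the attach succeeded and the CSI health monitor reports the volume healthy. The pod never reaches Running only because the nginx image pull keeps hitting the same Docker Hub 429, so the 6m0s wait expires. The "timed out waiting for the condition" wording is the generic error returned by the apimachinery wait helpers when a poll condition never becomes true within the timeout; here is a minimal sketch of a poll of that shape (the clientset setup, namespace, and pod name are placeholders, not the test's actual helper).

// Illustrative poll loop: keep checking a pod's phase until it is Running or
// the timeout elapses, at which point wait returns ErrWaitTimeout
// ("timed out waiting for the condition").
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodRunning(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Keep polling while the pod is Pending, e.g. stuck in ImagePullBackOff.
		return pod.Status.Phase == corev1.PodRunning, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodRunning(client, "default", "task-pv-pod", 6*time.Minute); err != nil {
		fmt.Println("wait failed:", err) // prints "timed out waiting for the condition" on timeout
	}
}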
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210817015042-1554185
helpers_test.go:236: (dbg) docker inspect addons-20210817015042-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416",
	        "Created": "2021-08-17T01:50:49.008425565Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1555108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T01:50:49.513909075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/hosts",
	        "LogPath": "/var/lib/docker/containers/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416/d0219469219e57c667e6c3252ac9e065172292c6d438efd3d0168be96c5bb416-json.log",
	        "Name": "/addons-20210817015042-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210817015042-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210817015042-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/merged",
	                "UpperDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/diff",
	                "WorkDir": "/var/lib/docker/overlay2/649ebd8b395b544986f7db549a6ef922a860fbc06e3b2b61c6a31f45df04fa61/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210817015042-1554185",
	                "Source": "/var/lib/docker/volumes/addons-20210817015042-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210817015042-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210817015042-1554185",
	                "name.minikube.sigs.k8s.io": "addons-20210817015042-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3e0a22fba78ee7873eb198b4450cb747bf4f2dc90aa87985648e04a1bfa9520",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50314"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50313"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50310"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50312"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50311"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3e0a22fba78",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210817015042-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d0219469219e",
	                        "addons-20210817015042-1554185"
	                    ],
	                    "NetworkID": "a9a617dbec2c4687c7bfc4bea262a36b8329d70029602dc944aed84d4dfb4f83",
	                    "EndpointID": "dad39de7953aad4709a05c2c9027de032d29f0302e6751762f5bb275759d2909",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
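Most of the inspect dump above is incidental for these post-mortems; the part that usually matters is NetworkSettings.Ports, which shows each guest port published on a 127.0.0.1 ephemeral port (for example 8443/tcp on 50311). Below is a hypothetical helper, not part of the test tooling, that decodes docker inspect JSON from stdin and prints only those mappings.

// Hypothetical helper: read `docker inspect` JSON on stdin and print the
// published port mappings for each container in the dump.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type inspectEntry struct {
	Name            string `json:"Name"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	var entries []inspectEntry
	if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println(e.Name)
		for guest, bindings := range e.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("  %s -> %s:%s\n", guest, b.HostIp, b.HostPort)
			}
		}
	}
}

Piping the output of docker inspect addons-20210817015042-1554185 into it would print the five forwarded ports listed above.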
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
helpers_test.go:245: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210817015042-1554185 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-20210817015042-1554185 logs -n 25: (1.3249421s)
helpers_test.go:253: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                  | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	| delete  | -p                                     | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	|         | download-only-20210817014929-1554185   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-only-20210817014929-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:28 UTC | Tue, 17 Aug 2021 01:50:28 UTC |
	|         | download-only-20210817014929-1554185   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-docker-20210817015028-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:42 UTC | Tue, 17 Aug 2021 01:50:42 UTC |
	|         | download-docker-20210817015028-1554185 |                                        |         |         |                               |                               |
	| start   | -p                                     | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:50:43 UTC | Tue, 17 Aug 2021 01:53:14 UTC |
	|         | addons-20210817015042-1554185          |                                        |         |         |                               |                               |
	|         | --wait=true --memory=4000              |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --addons=registry                      |                                        |         |         |                               |                               |
	|         | --addons=metrics-server                |                                        |         |         |                               |                               |
	|         | --addons=olm                           |                                        |         |         |                               |                               |
	|         | --addons=volumesnapshots               |                                        |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver           |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --addons=ingress                       |                                        |         |         |                               |                               |
	|         | --addons=gcp-auth                      |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:54:25 UTC | Tue, 17 Aug 2021 01:54:25 UTC |
	|         | ip                                     |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:57:09 UTC | Tue, 17 Aug 2021 01:57:10 UTC |
	|         | addons disable registry                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:57:10 UTC | Tue, 17 Aug 2021 01:57:11 UTC |
	|         | logs -n 25                             |                                        |         |         |                               |                               |
	| -p      | addons-20210817015042-1554185          | addons-20210817015042-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 01:57:21 UTC | Tue, 17 Aug 2021 01:57:48 UTC |
	|         | addons disable gcp-auth                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 01:50:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 01:50:43.004283 1554672 out.go:298] Setting OutFile to fd 1 ...
	I0817 01:50:43.004408 1554672 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:50:43.004431 1554672 out.go:311] Setting ErrFile to fd 2...
	I0817 01:50:43.004441 1554672 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:50:43.004581 1554672 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 01:50:43.004871 1554672 out.go:305] Setting JSON to false
	I0817 01:50:43.005775 1554672 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34381,"bootTime":1629130662,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 01:50:43.005843 1554672 start.go:121] virtualization:  
	I0817 01:50:43.008113 1554672 out.go:177] * [addons-20210817015042-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 01:50:43.010059 1554672 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 01:50:43.009081 1554672 notify.go:169] Checking for updates...
	I0817 01:50:43.011571 1554672 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 01:50:43.013130 1554672 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 01:50:43.014848 1554672 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 01:50:43.015025 1554672 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 01:50:43.049197 1554672 docker.go:132] docker version: linux-20.10.8
	I0817 01:50:43.049279 1554672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:50:43.144133 1554672 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:50:43.088038469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:50:43.144227 1554672 docker.go:244] overlay module found
	I0817 01:50:43.146324 1554672 out.go:177] * Using the docker driver based on user configuration
	I0817 01:50:43.146348 1554672 start.go:278] selected driver: docker
	I0817 01:50:43.146353 1554672 start.go:751] validating driver "docker" against <nil>
	I0817 01:50:43.146367 1554672 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 01:50:43.146408 1554672 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 01:50:43.146423 1554672 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 01:50:43.147842 1554672 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 01:50:43.148132 1554672 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:50:43.222251 1554672 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:50:43.17341921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInf
o:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:50:43.222365 1554672 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 01:50:43.222521 1554672 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 01:50:43.222542 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:50:43.222549 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:50:43.222565 1554672 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 01:50:43.222570 1554672 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 01:50:43.222582 1554672 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 01:50:43.222589 1554672 start_flags.go:277] config:
	{Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
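The config dump above is what the start flags recorded in the command table expand to; as a trimmed, hypothetical sketch (only the driver, runtime and the auto-set CNI extra-config are shown), the same kubelet override could also be passed explicitly:
    $ out/minikube-linux-arm64 start -p addons-20210817015042-1554185 --driver=docker --container-runtime=containerd --extra-config=kubelet.cni-conf-dir=/etc/cni/net.mk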
	I0817 01:50:43.224429 1554672 out.go:177] * Starting control plane node addons-20210817015042-1554185 in cluster addons-20210817015042-1554185
	I0817 01:50:43.224467 1554672 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 01:50:43.226166 1554672 out.go:177] * Pulling base image ...
	I0817 01:50:43.226186 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:50:43.226218 1554672 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 01:50:43.226230 1554672 cache.go:56] Caching tarball of preloaded images
	I0817 01:50:43.226359 1554672 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 01:50:43.226380 1554672 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 01:50:43.226662 1554672 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json ...
	I0817 01:50:43.226688 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json: {Name:mk832a7647425177a5f2be8874629457bb58883b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:50:43.226846 1554672 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 01:50:43.267020 1554672 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 01:50:43.267048 1554672 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 01:50:43.267060 1554672 cache.go:205] Successfully downloaded all kic artifacts
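The base-image check above only queries the local docker daemon; a hypothetical manual equivalent, using the image reference copied from the log, would be:
    $ docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 --format '{{.Id}}'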
	I0817 01:50:43.267095 1554672 start.go:313] acquiring machines lock for addons-20210817015042-1554185: {Name:mkc848aa47e63f497fa6d048b39bc33e9d106216 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 01:50:43.267208 1554672 start.go:317] acquired machines lock for "addons-20210817015042-1554185" in 92.061µs
	I0817 01:50:43.267235 1554672 start.go:89] Provisioning new machine with config: &{Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 01:50:43.267309 1554672 start.go:126] createHost starting for "" (driver="docker")
	I0817 01:50:43.269344 1554672 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0817 01:50:43.269558 1554672 start.go:160] libmachine.API.Create for "addons-20210817015042-1554185" (driver="docker")
	I0817 01:50:43.269585 1554672 client.go:168] LocalClient.Create starting
	I0817 01:50:43.269667 1554672 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0817 01:50:43.834992 1554672 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0817 01:50:44.271080 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 01:50:44.298072 1554672 cli_runner.go:162] docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 01:50:44.298133 1554672 network_create.go:255] running [docker network inspect addons-20210817015042-1554185] to gather additional debugging logs...
	I0817 01:50:44.298149 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185
	W0817 01:50:44.324372 1554672 cli_runner.go:162] docker network inspect addons-20210817015042-1554185 returned with exit code 1
	I0817 01:50:44.324396 1554672 network_create.go:258] error running [docker network inspect addons-20210817015042-1554185]: docker network inspect addons-20210817015042-1554185: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210817015042-1554185
	I0817 01:50:44.324409 1554672 network_create.go:260] output of [docker network inspect addons-20210817015042-1554185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210817015042-1554185
	
	** /stderr **
	I0817 01:50:44.324473 1554672 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 01:50:44.351093 1554672 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x40005be280] misses:0}
	I0817 01:50:44.351140 1554672 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 01:50:44.351162 1554672 network_create.go:106] attempt to create docker network addons-20210817015042-1554185 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 01:50:44.351211 1554672 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210817015042-1554185
	I0817 01:50:44.413803 1554672 network_create.go:90] docker network addons-20210817015042-1554185 192.168.49.0/24 created
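The inspect-then-create sequence above is plain docker CLI usage; as a minimal sketch, the same network can be inspected and created by hand with the exact arguments logged for this profile:
    $ docker network inspect addons-20210817015042-1554185
    $ docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210817015042-1554185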
	I0817 01:50:44.413829 1554672 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210817015042-1554185" container
	I0817 01:50:44.413892 1554672 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 01:50:44.440106 1554672 cli_runner.go:115] Run: docker volume create addons-20210817015042-1554185 --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --label created_by.minikube.sigs.k8s.io=true
	I0817 01:50:44.467518 1554672 oci.go:102] Successfully created a docker volume addons-20210817015042-1554185
	I0817 01:50:44.467581 1554672 cli_runner.go:115] Run: docker run --rm --name addons-20210817015042-1554185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --entrypoint /usr/bin/test -v addons-20210817015042-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 01:50:48.841251 1554672 cli_runner.go:168] Completed: docker run --rm --name addons-20210817015042-1554185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --entrypoint /usr/bin/test -v addons-20210817015042-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (4.373634594s)
	I0817 01:50:48.841276 1554672 oci.go:106] Successfully prepared a docker volume addons-20210817015042-1554185
	W0817 01:50:48.841301 1554672 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0817 01:50:48.841310 1554672 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0817 01:50:48.841360 1554672 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 01:50:48.841549 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:50:48.841570 1554672 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 01:50:48.841627 1554672 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210817015042-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 01:50:48.971581 1554672 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210817015042-1554185 --name addons-20210817015042-1554185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210817015042-1554185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210817015042-1554185 --network addons-20210817015042-1554185 --ip 192.168.49.2 --volume addons-20210817015042-1554185:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 01:50:49.523596 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Running}}
	I0817 01:50:49.590786 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:50:49.633119 1554672 cli_runner.go:115] Run: docker exec addons-20210817015042-1554185 stat /var/lib/dpkg/alternatives/iptables
	I0817 01:50:49.741896 1554672 oci.go:278] the created container "addons-20210817015042-1554185" has a running status.
	I0817 01:50:49.741921 1554672 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa...
	I0817 01:50:50.532064 1554672 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 01:50:50.667778 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:50:50.707368 1554672 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 01:50:50.707384 1554672 kic_runner.go:115] Args: [docker exec --privileged addons-20210817015042-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 01:51:00.466206 1554672 kic_runner.go:124] Done: [docker exec --privileged addons-20210817015042-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]: (9.758798263s)
	I0817 01:51:02.783214 1554672 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210817015042-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (13.941553277s)
	I0817 01:51:02.783245 1554672 kic.go:188] duration metric: took 13.941672 seconds to extract preloaded images to volume
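The extraction step above unpacks the preloaded image tarball into the profile's docker volume; one hedged way to confirm the volume was populated (volume and image names come from the log, the listing path is an assumption) is to list /var/lib through a throwaway container:
    $ docker run --rm --entrypoint /usr/bin/ls -v addons-20210817015042-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 /var/lib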
	I0817 01:51:02.783324 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:02.814748 1554672 machine.go:88] provisioning docker machine ...
	I0817 01:51:02.814781 1554672 ubuntu.go:169] provisioning hostname "addons-20210817015042-1554185"
	I0817 01:51:02.814865 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:02.842333 1554672 main.go:130] libmachine: Using SSH client type: native
	I0817 01:51:02.842498 1554672 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50314 <nil> <nil>}
	I0817 01:51:02.842516 1554672 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210817015042-1554185 && echo "addons-20210817015042-1554185" | sudo tee /etc/hostname
	I0817 01:51:02.970606 1554672 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210817015042-1554185
	
	I0817 01:51:02.970693 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:02.999373 1554672 main.go:130] libmachine: Using SSH client type: native
	I0817 01:51:02.999533 1554672 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50314 <nil> <nil>}
	I0817 01:51:02.999560 1554672 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210817015042-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210817015042-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210817015042-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 01:51:03.114034 1554672 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 01:51:03.114055 1554672 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 01:51:03.114074 1554672 ubuntu.go:177] setting up certificates
	I0817 01:51:03.114082 1554672 provision.go:83] configureAuth start
	I0817 01:51:03.114135 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.141579 1554672 provision.go:138] copyHostCerts
	I0817 01:51:03.141653 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 01:51:03.141736 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 01:51:03.141784 1554672 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 01:51:03.141822 1554672 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.addons-20210817015042-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210817015042-1554185]
	I0817 01:51:03.398920 1554672 provision.go:172] copyRemoteCerts
	I0817 01:51:03.398968 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 01:51:03.399007 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.426820 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.508566 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 01:51:03.525114 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0817 01:51:03.539071 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 01:51:03.553109 1554672 provision.go:86] duration metric: configureAuth took 439.012307ms
	I0817 01:51:03.553124 1554672 ubuntu.go:193] setting minikube options for container-runtime
	I0817 01:51:03.553268 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:03.553275 1554672 machine.go:91] provisioned docker machine in 738.505134ms
	I0817 01:51:03.553280 1554672 client.go:171] LocalClient.Create took 20.283690224s
	I0817 01:51:03.553289 1554672 start.go:168] duration metric: libmachine.API.Create for "addons-20210817015042-1554185" took 20.283731225s
	I0817 01:51:03.553296 1554672 start.go:267] post-start starting for "addons-20210817015042-1554185" (driver="docker")
	I0817 01:51:03.553301 1554672 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 01:51:03.553340 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 01:51:03.553372 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.581866 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.664711 1554672 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 01:51:03.667021 1554672 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 01:51:03.667044 1554672 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 01:51:03.667055 1554672 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 01:51:03.667073 1554672 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 01:51:03.667081 1554672 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 01:51:03.667131 1554672 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 01:51:03.667155 1554672 start.go:270] post-start completed in 113.85344ms
	I0817 01:51:03.667437 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.695177 1554672 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/config.json ...
	I0817 01:51:03.695366 1554672 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 01:51:03.695414 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.722965 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.802744 1554672 start.go:129] duration metric: createHost completed in 20.535424588s
	I0817 01:51:03.802761 1554672 start.go:80] releasing machines lock for "addons-20210817015042-1554185", held for 20.535539837s
	I0817 01:51:03.802834 1554672 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210817015042-1554185
	I0817 01:51:03.830388 1554672 ssh_runner.go:149] Run: systemctl --version
	I0817 01:51:03.830437 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.830658 1554672 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 01:51:03.830713 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:03.864441 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.872939 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:03.950680 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 01:51:04.148514 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 01:51:04.156921 1554672 docker.go:153] disabling docker service ...
	I0817 01:51:04.156964 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 01:51:04.172287 1554672 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 01:51:04.180567 1554672 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 01:51:04.253873 1554672 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 01:51:04.337794 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 01:51:04.346079 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 01:51:04.356986 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
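Both files written above land inside the node container; a minimal sketch for reading back what was rendered (container name and paths taken from the log):
    $ docker exec addons-20210817015042-1554185 cat /etc/crictl.yaml
    $ docker exec addons-20210817015042-1554185 cat /etc/containerd/config.toml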
	I0817 01:51:04.369213 1554672 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 01:51:04.375739 1554672 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 01:51:04.381264 1554672 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 01:51:04.455762 1554672 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 01:51:04.531663 1554672 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 01:51:04.531729 1554672 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 01:51:04.535130 1554672 start.go:413] Will wait 60s for crictl version
	I0817 01:51:04.535189 1554672 ssh_runner.go:149] Run: sudo crictl version
	I0817 01:51:04.564551 1554672 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T01:51:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 01:51:15.611398 1554672 ssh_runner.go:149] Run: sudo crictl version
	I0817 01:51:15.634965 1554672 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 01:51:15.635034 1554672 ssh_runner.go:149] Run: containerd --version
	I0817 01:51:15.656211 1554672 ssh_runner.go:149] Run: containerd --version
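The earlier retry ("server is not initialized yet") is simply waiting for containerd's CRI endpoint to come up; the same probe can be issued by hand while the node container is running (assuming docker exec runs as root in the kic image, as the earlier chown exec suggests):
    $ docker exec addons-20210817015042-1554185 crictl version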
	I0817 01:51:15.679165 1554672 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 01:51:15.679262 1554672 cli_runner.go:115] Run: docker network inspect addons-20210817015042-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 01:51:15.708112 1554672 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 01:51:15.711074 1554672 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 01:51:15.720057 1554672 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:51:15.720115 1554672 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 01:51:15.753630 1554672 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 01:51:15.753654 1554672 containerd.go:517] Images already preloaded, skipping extraction
	I0817 01:51:15.753696 1554672 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 01:51:15.775284 1554672 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 01:51:15.775306 1554672 cache_images.go:74] Images are preloaded, skipping loading
	I0817 01:51:15.775376 1554672 ssh_runner.go:149] Run: sudo crictl info
	I0817 01:51:15.796264 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:51:15.796286 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:51:15.796297 1554672 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 01:51:15.796310 1554672 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210817015042-1554185 NodeName:addons-20210817015042-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFil
e:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 01:51:15.796446 1554672 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "addons-20210817015042-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 01:51:15.796533 1554672 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-20210817015042-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 01:51:15.796591 1554672 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 01:51:15.802721 1554672 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 01:51:15.802788 1554672 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 01:51:15.808456 1554672 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (574 bytes)
	I0817 01:51:15.819782 1554672 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 01:51:15.830993 1554672 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
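The three scp targets above are the kubelet systemd drop-in, the kubelet unit, and the generated kubeadm config; a minimal sketch for reading them back off the node:
    $ docker exec addons-20210817015042-1554185 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    $ docker exec addons-20210817015042-1554185 cat /var/tmp/minikube/kubeadm.yaml.new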
	I0817 01:51:15.841895 1554672 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 01:51:15.844431 1554672 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 01:51:15.852834 1554672 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185 for IP: 192.168.49.2
	I0817 01:51:15.852892 1554672 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 01:51:16.232897 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt ...
	I0817 01:51:16.232924 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt: {Name:mkc452a3ca463d1cef7aa1398b1abd9dddd24545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.233112 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key ...
	I0817 01:51:16.233129 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key: {Name:mkb1c0cc6e35e952c8fa312da56d58ae26957187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.233218 1554672 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 01:51:16.929155 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt ...
	I0817 01:51:16.929187 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt: {Name:mk17a5a660a62b953e570d93eac621069f930efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.929368 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key ...
	I0817 01:51:16.929384 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key: {Name:mk40bf80fb6d166c627fea37bd45ce901649a411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:16.929516 1554672 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key
	I0817 01:51:16.929537 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt with IP's: []
	I0817 01:51:17.141841 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt ...
	I0817 01:51:17.141869 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: {Name:mk127978c85cd8b22e7e4466afd86c3104950f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.142041 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key ...
	I0817 01:51:17.142056 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.key: {Name:mk9c80a73b58e8a5fc9e3f4aca38da7b4d098319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.142143 1554672 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2
	I0817 01:51:17.142152 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 01:51:17.697755 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 ...
	I0817 01:51:17.697786 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2: {Name:mk68739490e6778fecd80380c013c3c92d6d4458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.698773 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2 ...
	I0817 01:51:17.698790 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2: {Name:mk844eac0cbe48c9235e9d8a8ec3aa0d9a836734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:17.698894 1554672 certs.go:308] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt
	I0817 01:51:17.698954 1554672 certs.go:312] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key
	I0817 01:51:17.699002 1554672 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key
	I0817 01:51:17.699012 1554672 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt with IP's: []
	I0817 01:51:18.551109 1554672 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt ...
	I0817 01:51:18.551144 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt: {Name:mkb606f4652991a4936ad1fb4f336e911d7af05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:18.551327 1554672 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key ...
	I0817 01:51:18.551342 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key: {Name:mkddf28d3df3bc53b2858cabdc2cbc08941228fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:18.551516 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 01:51:18.551557 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 01:51:18.551586 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 01:51:18.551613 1554672 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 01:51:18.554164 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 01:51:18.569715 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 01:51:18.584425 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 01:51:18.598873 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 01:51:18.613294 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 01:51:18.627638 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 01:51:18.642450 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 01:51:18.657110 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 01:51:18.671462 1554672 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 01:51:18.686137 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 01:51:18.696974 1554672 ssh_runner.go:149] Run: openssl version
	I0817 01:51:18.701232 1554672 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 01:51:18.707560 1554672 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.710430 1554672 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.710492 1554672 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 01:51:18.714912 1554672 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 01:51:18.721735 1554672 kubeadm.go:390] StartCluster: {Name:addons-20210817015042-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210817015042-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:51:18.721819 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 01:51:18.721874 1554672 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 01:51:18.749686 1554672 cri.go:76] found id: ""
	I0817 01:51:18.749758 1554672 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 01:51:18.755843 1554672 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 01:51:18.761633 1554672 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 01:51:18.761681 1554672 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 01:51:18.767360 1554672 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 01:51:18.767403 1554672 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 01:51:44.489308 1554672 out.go:204]   - Generating certificates and keys ...
	I0817 01:51:44.492258 1554672 out.go:204]   - Booting up control plane ...
	I0817 01:51:44.495405 1554672 out.go:204]   - Configuring RBAC rules ...
	I0817 01:51:44.497771 1554672 cni.go:93] Creating CNI manager for ""
	I0817 01:51:44.497802 1554672 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:51:44.499744 1554672 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 01:51:44.499923 1554672 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 01:51:44.514536 1554672 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 01:51:44.514555 1554672 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 01:51:44.537490 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 01:51:45.293207 1554672 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 01:51:45.293283 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:45.293354 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=addons-20210817015042-1554185 minikube.k8s.io/updated_at=2021_08_17T01_51_45_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:45.441142 1554672 ops.go:34] apiserver oom_adj: -16
	I0817 01:51:45.441307 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:46.028917 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:46.528512 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:47.028526 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:47.529129 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:48.028453 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:48.529207 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:49.029151 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:49.528902 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:50.028980 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:50.528509 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:51.028493 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:51.528957 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:52.028487 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:52.529123 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:53.029078 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:53.528513 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:54.029046 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:54.529488 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:55.029473 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:55.529461 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:56.029173 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:56.529368 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.028522 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.528583 1554672 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 01:51:57.667234 1554672 kubeadm.go:985] duration metric: took 12.373989422s to wait for elevateKubeSystemPrivileges.
	I0817 01:51:57.667260 1554672 kubeadm.go:392] StartCluster complete in 38.945530358s
	I0817 01:51:57.667277 1554672 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:57.667387 1554672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 01:51:57.667820 1554672 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 01:51:58.208648 1554672 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210817015042-1554185" rescaled to 1
	I0817 01:51:58.208753 1554672 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 01:51:58.211352 1554672 out.go:177] * Verifying Kubernetes components...
	I0817 01:51:58.211399 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 01:51:58.208813 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 01:51:58.208883 1554672 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0817 01:51:58.211533 1554672 addons.go:59] Setting volumesnapshots=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.211549 1554672 addons.go:135] Setting addon volumesnapshots=true in "addons-20210817015042-1554185"
	I0817 01:51:58.211576 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.212086 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.212136 1554672 addons.go:59] Setting ingress=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.212151 1554672 addons.go:135] Setting addon ingress=true in "addons-20210817015042-1554185"
	I0817 01:51:58.212176 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.212566 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.212695 1554672 addons.go:59] Setting metrics-server=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.212709 1554672 addons.go:135] Setting addon metrics-server=true in "addons-20210817015042-1554185"
	I0817 01:51:58.212725 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.213112 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.213170 1554672 addons.go:59] Setting olm=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.213183 1554672 addons.go:135] Setting addon olm=true in "addons-20210817015042-1554185"
	I0817 01:51:58.213199 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.213584 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.213633 1554672 addons.go:59] Setting registry=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.213647 1554672 addons.go:135] Setting addon registry=true in "addons-20210817015042-1554185"
	I0817 01:51:58.213662 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.214028 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.214076 1554672 addons.go:59] Setting storage-provisioner=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.214086 1554672 addons.go:135] Setting addon storage-provisioner=true in "addons-20210817015042-1554185"
	W0817 01:51:58.214091 1554672 addons.go:147] addon storage-provisioner should already be in state true
	I0817 01:51:58.214110 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.214476 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.214533 1554672 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.214556 1554672 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210817015042-1554185"
	I0817 01:51:58.214578 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.216636 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.209046 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:58.236087 1554672 addons.go:59] Setting default-storageclass=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.236110 1554672 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210817015042-1554185"
	I0817 01:51:58.236416 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.236497 1554672 addons.go:59] Setting gcp-auth=true in profile "addons-20210817015042-1554185"
	I0817 01:51:58.236511 1554672 mustload.go:65] Loading cluster: addons-20210817015042-1554185
	I0817 01:51:58.236645 1554672 config.go:177] Loaded profile config "addons-20210817015042-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 01:51:58.236850 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.386147 1554672 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0817 01:51:58.387928 1554672 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0817 01:51:58.390873 1554672 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0817 01:51:58.390923 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0817 01:51:58.390932 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0817 01:51:58.390988 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.537862 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0817 01:51:58.539571 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0817 01:51:58.541491 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0817 01:51:58.545257 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0817 01:51:58.547105 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0817 01:51:58.548521 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0817 01:51:58.548575 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0817 01:51:58.548588 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0817 01:51:58.550068 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0817 01:51:58.548640 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.554947 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0817 01:51:58.556578 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0817 01:51:58.558141 1554672 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0817 01:51:58.558186 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0817 01:51:58.558193 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0817 01:51:58.558233 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.584190 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.586499 1554672 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0817 01:51:58.586556 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 01:51:58.586564 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 01:51:58.586607 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.586985 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 01:51:58.588594 1554672 node_ready.go:35] waiting up to 6m0s for node "addons-20210817015042-1554185" to be "Ready" ...
	I0817 01:51:58.646058 1554672 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0817 01:51:58.647847 1554672 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0817 01:51:58.697505 1554672 out.go:177]   - Using image registry:2.7.1
	I0817 01:51:58.699108 1554672 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0817 01:51:58.699188 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0817 01:51:58.699196 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0817 01:51:58.699248 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.738566 1554672 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 01:51:58.738649 1554672 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 01:51:58.738662 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 01:51:58.738710 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.776023 1554672 addons.go:135] Setting addon default-storageclass=true in "addons-20210817015042-1554185"
	W0817 01:51:58.776048 1554672 addons.go:147] addon default-storageclass should already be in state true
	I0817 01:51:58.776074 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.776526 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:58.805011 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:58.875835 1554672 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0817 01:51:58.875902 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0817 01:51:58.876004 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.903641 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.916757 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0817 01:51:58.916831 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:58.922593 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:58.927465 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.028544 1554672 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 01:51:59.028564 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 01:51:59.028615 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:59.050931 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.052785 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.078548 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.103856 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.136455 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.164633 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0817 01:51:59.164654 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0817 01:51:59.294730 1554672 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0817 01:51:59.295256 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0817 01:51:59.334238 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0817 01:51:59.362229 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0817 01:51:59.362285 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0817 01:51:59.419723 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 01:51:59.419773 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0817 01:51:59.430396 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 01:51:59.439975 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0817 01:51:59.440022 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0817 01:51:59.457816 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0817 01:51:59.457862 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0817 01:51:59.478937 1554672 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0817 01:51:59.484531 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0817 01:51:59.484544 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0817 01:51:59.492438 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 01:51:59.516776 1554672 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0817 01:51:59.516819 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0817 01:51:59.533786 1554672 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0817 01:51:59.533830 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0817 01:51:59.538721 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 01:51:59.538765 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 01:51:59.551896 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0817 01:51:59.551933 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0817 01:51:59.577593 1554672 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0817 01:51:59.577637 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0817 01:51:59.586368 1554672 addons.go:135] Setting addon gcp-auth=true in "addons-20210817015042-1554185"
	I0817 01:51:59.586439 1554672 host.go:66] Checking if "addons-20210817015042-1554185" exists ...
	I0817 01:51:59.586992 1554672 cli_runner.go:115] Run: docker container inspect addons-20210817015042-1554185 --format={{.State.Status}}
	I0817 01:51:59.602216 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0817 01:51:59.643183 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0817 01:51:59.643200 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0817 01:51:59.643758 1554672 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 01:51:59.643772 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 01:51:59.653214 1554672 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0817 01:51:59.654750 1554672 out.go:177]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0817 01:51:59.654796 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0817 01:51:59.654803 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0817 01:51:59.654918 1554672 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210817015042-1554185
	I0817 01:51:59.662952 1554672 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.075944471s)
	I0817 01:51:59.662971 1554672 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0817 01:51:59.675174 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0817 01:51:59.683436 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0817 01:51:59.683451 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0817 01:51:59.698631 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0817 01:51:59.698646 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0817 01:51:59.718898 1554672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50314 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/addons-20210817015042-1554185/id_rsa Username:docker}
	I0817 01:51:59.744738 1554672 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:51:59.744759 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0817 01:51:59.753993 1554672 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0817 01:51:59.754011 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0817 01:51:59.768957 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 01:51:59.806478 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0817 01:51:59.806499 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0817 01:51:59.841542 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:51:59.896907 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0817 01:51:59.896928 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0817 01:52:00.017093 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0817 01:52:00.017114 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0817 01:52:00.098144 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0817 01:52:00.098165 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0817 01:52:00.148128 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0817 01:52:00.148150 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0817 01:52:00.212978 1554672 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 01:52:00.213000 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0817 01:52:00.240295 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0817 01:52:00.240316 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0817 01:52:00.328162 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 01:52:00.392480 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0817 01:52:00.392504 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0817 01:52:00.475797 1554672 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 01:52:00.475819 1554672 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0817 01:52:00.588665 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 01:52:00.613870 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:01.300912 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.870464987s)
	I0817 01:52:01.300955 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (1.966695194s)
	I0817 01:52:01.300964 1554672 addons.go:313] Verifying addon ingress=true in "addons-20210817015042-1554185"
	I0817 01:52:01.302793 1554672 out.go:177] * Verifying ingress addon...
	I0817 01:52:01.301217 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.808764624s)
	I0817 01:52:01.304580 1554672 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0817 01:52:01.324823 1554672 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0817 01:52:01.324869 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:01.866150 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:02.455106 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:02.756616 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:02.904020 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:03.374970 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:03.900784 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:04.389210 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:04.828604 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:05.113059 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:05.328501 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:05.828619 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.329237 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.849401 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.17419898s)
	I0817 01:52:06.849430 1554672 addons.go:313] Verifying addon registry=true in "addons-20210817015042-1554185"
	I0817 01:52:06.851610 1554672 out.go:177] * Verifying registry addon...
	I0817 01:52:06.849712 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.080729919s)
	I0817 01:52:06.849894 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (7.247660472s)
	I0817 01:52:06.850006 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.008434003s)
	I0817 01:52:06.850075 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (6.521890577s)
	I0817 01:52:06.853580 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0817 01:52:06.853656 1554672 addons.go:313] Verifying addon metrics-server=true in "addons-20210817015042-1554185"
	W0817 01:52:06.853699 1554672 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0817 01:52:06.853907 1554672 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	W0817 01:52:06.853735 1554672 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0817 01:52:06.853958 1554672 retry.go:31] will retry after 291.140013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0817 01:52:06.853773 1554672 addons.go:313] Verifying addon gcp-auth=true in "addons-20210817015042-1554185"
	I0817 01:52:06.856170 1554672 out.go:177] * Verifying gcp-auth addon...
	I0817 01:52:06.858037 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0817 01:52:06.879437 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:06.901505 1554672 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 01:52:06.901521 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:06.902116 1554672 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0817 01:52:06.902127 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
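The kapi.go lines that fill the rest of this log are a per-addon polling loop: list the pods matching a label selector in the addon's namespace and keep waiting while any of them is still Pending. A rough client-go sketch of that pattern (a hypothetical helper, not minikube's kapi.go; names and the poll interval are assumptions) is:

    package addons

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsByLabel polls until every pod matching selector in ns is Running.
    func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil || len(pods.Items) == 0 {
                return false, nil // transient error or pods not created yet: keep waiting
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    return false, nil // at least one pod is still Pending
                }
            }
            return true, nil
        })
    }

The repeated "waiting for pod ... Pending" entries that follow, spaced roughly half a second apart per selector, are the visible output of a loop of this shape.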
	I0817 01:52:07.114493 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:07.145764 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 01:52:07.214608 1554672 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0817 01:52:07.318415 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.729707072s)
	I0817 01:52:07.318482 1554672 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210817015042-1554185"
	I0817 01:52:07.320343 1554672 out.go:177] * Verifying csi-hostpath-driver addon...
	I0817 01:52:07.322240 1554672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0817 01:52:07.329026 1554672 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0817 01:52:07.329072 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:07.329707 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:07.406785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:07.407051 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.829299 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:07.833611 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:07.905862 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:07.906518 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.243240 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.097293811s)
	I0817 01:52:08.329812 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:08.338224 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:08.405779 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:08.407978 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.530852 1554672 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (1.316180136s)
	I0817 01:52:08.829006 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:08.834034 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:08.905993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:08.906433 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.328255 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:09.333785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:09.405657 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:09.405914 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.613886 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:09.829205 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:09.832931 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:09.905035 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:09.905962 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.328643 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:10.333042 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:10.404941 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:10.405901 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.829248 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:10.833275 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:10.905773 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:10.906291 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.328954 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:11.333012 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:11.409301 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.410066 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:11.614143 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:11.828872 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:11.833797 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:11.904929 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:11.905665 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.328367 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:12.333086 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:12.405384 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.405823 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:12.829376 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:12.833255 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:12.905024 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:12.905295 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.330689 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:13.338216 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:13.404972 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.406085 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:13.829177 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:13.832929 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:13.904662 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:13.905242 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.113342 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:14.328450 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:14.333455 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:14.404940 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.405321 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:14.827779 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:14.832993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:14.905252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:14.905259 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:15.328264 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:15.332934 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:15.404658 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:15.405224 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:15.828486 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:15.833605 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:15.904727 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:15.905383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:16.328197 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:16.332914 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:16.405192 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:16.405977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:16.613508 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:16.828234 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:16.833383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:16.904446 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:16.905357 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.327749 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:17.337646 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:17.404755 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:17.405248 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.827645 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:17.832968 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:17.904120 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:17.905322 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:18.328032 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:18.332346 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:18.405262 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:18.405850 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:18.828047 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:18.833667 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:18.906070 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:18.906612 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:19.112711 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:19.327808 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:19.332949 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:19.404756 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:19.404964 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:19.828001 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:19.833437 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:19.904449 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:19.904977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.327656 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:20.333295 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:20.404715 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.405667 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:20.828390 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:20.833214 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:20.905458 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:20.905671 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:21.328312 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:21.333138 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:21.405037 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:21.406170 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:21.612764 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:21.944477 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:21.946682 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:21.947605 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:21.947754 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:22.328433 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:22.333541 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:22.404285 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:22.405669 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:22.827511 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:22.833159 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:22.905254 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:22.905581 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:23.328750 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:23.333436 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:23.404313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:23.405077 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:23.613578 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:23.828253 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:23.832694 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:23.904993 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:23.905761 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:24.328880 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:24.333313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:24.404520 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:24.404733 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:24.828322 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:24.833601 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:24.905217 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:24.905274 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.330911 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:25.337306 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:25.404857 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.405921 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:25.832639 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:25.835193 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:25.905020 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:25.905738 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:26.112693 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:26.327937 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:26.333091 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:26.405361 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:26.405698 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:26.828674 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:26.833006 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:26.905177 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:26.906093 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:27.337144 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:27.338231 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:27.406265 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:27.406265 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:27.828866 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:27.833010 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:27.904570 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:27.905457 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.112963 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:28.328408 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:28.333808 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:28.404888 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:28.405625 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.828928 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:28.833221 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:28.905969 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:28.906240 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:29.330142 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:29.334291 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:29.404551 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:29.405831 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:29.837438 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:29.838402 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:29.905810 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:29.905987 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.113285 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:30.328348 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:30.332925 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:30.405080 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:30.405351 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.828025 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:30.832792 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:30.905180 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:30.905627 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.328284 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:31.333115 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:31.405329 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:31.406706 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.828629 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:31.833620 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:31.908890 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:31.911347 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.113408 1554672 node_ready.go:58] node "addons-20210817015042-1554185" has status "Ready":"False"
	I0817 01:52:32.328824 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:32.333028 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:32.404805 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.405808 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:32.829223 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:32.833077 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:32.905936 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:32.906733 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:33.113600 1554672 node_ready.go:49] node "addons-20210817015042-1554185" has status "Ready":"True"
	I0817 01:52:33.113625 1554672 node_ready.go:38] duration metric: took 34.525011363s waiting for node "addons-20210817015042-1554185" to be "Ready" ...
	I0817 01:52:33.113634 1554672 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 01:52:33.122258 1554672 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:33.328105 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:33.333112 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:33.405131 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:33.406483 1554672 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 01:52:33.406499 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:33.828753 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:33.833308 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:33.905785 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:33.906578 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:34.328900 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:34.333293 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:34.405074 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:34.405422 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:34.829036 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:34.844069 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:34.907082 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:34.907261 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.133323 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
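The status dump above shows why coredns is still Pending: the single node carries the node.kubernetes.io/not-ready taint, so the scheduler reports "0/1 nodes are available". A short client-go sketch for inspecting node taints in this situation (illustrative only, not part of the test harness) is:

    package inspect

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeTaints lists the taints on every node; a lingering
    // node.kubernetes.io/not-ready taint explains Unschedulable pods
    // like the coredns status above.
    func printNodeTaints(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            for _, t := range n.Spec.Taints {
                fmt.Printf("%s\t%s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
            }
        }
        return nil
    }

As the later entries show, the node reports Ready at 01:52:33 and the pod is scheduled shortly afterwards, so the taint here is only the expected startup transient.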
	I0817 01:52:35.329658 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:35.340946 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:35.406005 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.406344 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:35.828964 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:35.836081 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:35.905164 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:35.905926 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:36.328635 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:36.333208 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:36.406327 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:36.406693 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:36.828912 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:36.834669 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:36.906233 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:36.907548 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:37.328276 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:37.333065 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:37.443517 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:37.443853 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:37.633378 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:37.829201 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:37.833434 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:37.906518 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:37.906857 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.329240 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:38.333317 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:38.408662 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.409011 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:38.828315 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:38.837240 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:38.904802 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:38.906525 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.329255 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:39.346113 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:39.413436 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.413760 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:39.634418 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:39.828371 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:39.833885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:39.905904 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:39.906262 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:40.328884 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:40.333598 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:40.405309 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:40.407193 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:40.828938 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:40.833697 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:40.905855 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:40.906245 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:41.329054 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:41.334180 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:41.404918 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:41.406327 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:41.828350 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:41.833549 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:41.905158 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:41.905842 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:42.193681 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:51:57 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:42.328599 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:42.335515 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:42.405022 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:42.405819 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:42.828942 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:42.833740 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:42.905762 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:42.905954 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:43.328334 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:43.333885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:43.415938 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:43.416337 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:43.829129 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:43.839083 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:43.905165 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:43.905905 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.328646 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:44.333389 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:44.404851 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.406163 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:44.634366 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:52:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:44.828620 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:44.833682 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:44.905712 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:44.909482 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:45.328143 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:45.332944 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:45.406611 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:45.407338 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:45.828634 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:45.832978 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:45.904648 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:45.905363 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:46.328711 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:46.333910 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:46.405839 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:46.406763 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:46.635221 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-17 01:52:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0817 01:52:46.828340 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:46.833989 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:46.905252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:46.906215 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:47.328332 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:47.333832 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:47.405675 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:47.407973 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:47.827969 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:47.833906 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:47.907574 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:47.912357 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.328701 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:48.333127 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:48.406135 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:48.406524 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.636787 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:48.828308 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:48.833330 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:48.906467 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:48.906683 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:49.328422 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:49.333598 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:49.405055 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:49.405237 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:49.828698 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:49.833563 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:49.905439 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:49.905671 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:50.329000 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:50.335089 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:50.406085 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:50.407525 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:50.829299 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:50.833324 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:50.906506 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:50.906956 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.134950 1554672 pod_ready.go:102] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:51.327885 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:51.333409 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:51.405287 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:51.406140 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.829287 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:51.834079 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:51.905595 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:51.906917 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.136668 1554672 pod_ready.go:92] pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.136691 1554672 pod_ready.go:81] duration metric: took 19.014386562s waiting for pod "coredns-558bd4d5db-sxct6" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.136717 1554672 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.140525 1554672 pod_ready.go:92] pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.140545 1554672 pod_ready.go:81] duration metric: took 3.820392ms waiting for pod "etcd-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.140557 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.144374 1554672 pod_ready.go:92] pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.144391 1554672 pod_ready.go:81] duration metric: took 3.805ms waiting for pod "kube-apiserver-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.144400 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.147997 1554672 pod_ready.go:92] pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.148018 1554672 pod_ready.go:81] duration metric: took 3.596018ms waiting for pod "kube-controller-manager-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.148027 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-88pjl" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.151630 1554672 pod_ready.go:92] pod "kube-proxy-88pjl" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.151645 1554672 pod_ready.go:81] duration metric: took 3.612895ms waiting for pod "kube-proxy-88pjl" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.151654 1554672 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.328964 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:52.333708 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:52.405187 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:52.406370 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.533532 1554672 pod_ready.go:92] pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 01:52:52.533558 1554672 pod_ready.go:81] duration metric: took 381.895022ms waiting for pod "kube-scheduler-addons-20210817015042-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.533568 1554672 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace to be "Ready" ...
	I0817 01:52:52.829155 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:52.839844 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:52.905885 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:52.906272 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.346344 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:53.352796 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:53.409937 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:53.410482 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.834056 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:53.834773 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:53.907214 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:53.907172 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.331399 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:54.336335 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:54.407048 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.410847 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:54.829058 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:54.833883 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:54.905684 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:54.906829 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:54.944019 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:55.328849 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:55.333455 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:55.406435 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:55.408050 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:55.834184 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:55.836250 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:55.907784 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:55.908229 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.340402 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:56.341855 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:56.405913 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:56.406308 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.829718 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:56.840586 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:56.908288 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:56.908568 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:56.948818 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:57.328503 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:57.334462 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:57.406776 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:57.407190 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:57.828588 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:57.833847 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:57.905081 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:57.906429 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 01:52:58.329593 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:58.335086 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:58.405528 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:58.406555 1554672 kapi.go:108] duration metric: took 51.552974836s to wait for kubernetes.io/minikube-addons=registry ...
	I0817 01:52:58.829266 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:58.833517 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:58.905974 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:59.342609 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:59.348252 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:59.405313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:52:59.444841 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:52:59.828685 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:52:59.833928 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:52:59.905309 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:00.328962 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:00.333845 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:00.405313 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:00.829039 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:00.834166 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:00.904823 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.328747 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:01.334336 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:01.404643 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.829758 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:01.835420 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:01.905318 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:01.945948 1554672 pod_ready.go:102] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"False"
	I0817 01:53:02.376424 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:02.377873 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:02.404990 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:02.828812 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:02.833383 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:02.904641 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 01:53:03.329032 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:03.337245 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:03.406764 1554672 kapi.go:108] duration metric: took 56.548723137s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0817 01:53:03.408669 1554672 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20210817015042-1554185 cluster.
	I0817 01:53:03.410521 1554672 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0817 01:53:03.412326 1554672 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
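	The gcp-auth messages just above mention a `gcp-auth-skip-secret` label for opting a pod out of credential mounting. As a minimal sketch (not taken from this run) of what that looks like when building a pod manifest with the Kubernetes Go types — the label value "true" and the pod/container names are illustrative assumptions:

	// Sketch: a pod labeled so the gcp-auth webhook skips mounting GCP credentials.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := &corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "example-pod", // hypothetical name
				Labels: map[string]string{
					// Presence of this key is what the addon note above refers to;
					// the value "true" is an assumption for illustration.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers:    []corev1.Container{{Name: "busybox", Image: "busybox"}},
				RestartPolicy: corev1.RestartPolicyNever,
			},
		}

		out, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out)) // print the manifest that could be applied to the cluster
	}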
	I0817 01:53:03.448173 1554672 pod_ready.go:92] pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace has status "Ready":"True"
	I0817 01:53:03.448196 1554672 pod_ready.go:81] duration metric: took 10.914620384s waiting for pod "metrics-server-77c99ccb96-x8mh4" in "kube-system" namespace to be "Ready" ...
	I0817 01:53:03.448215 1554672 pod_ready.go:38] duration metric: took 30.334547327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 01:53:03.448235 1554672 api_server.go:50] waiting for apiserver process to appear ...
	I0817 01:53:03.448250 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:03.448304 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:03.564171 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:03.564232 1554672 cri.go:76] found id: ""
	I0817 01:53:03.564250 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:03.564343 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.575403 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:03.575484 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:03.604432 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:03.604494 1554672 cri.go:76] found id: ""
	I0817 01:53:03.604513 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:03.604561 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.607149 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:03.607215 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:03.632895 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:03.632908 1554672 cri.go:76] found id: ""
	I0817 01:53:03.632913 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:03.632967 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.635372 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:03.635435 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:03.664635 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:03.664650 1554672 cri.go:76] found id: ""
	I0817 01:53:03.664655 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:03.664689 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.667197 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:03.667270 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:03.691527 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:03.691545 1554672 cri.go:76] found id: ""
	I0817 01:53:03.691550 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:03.691582 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.693995 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:03.694060 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:03.717435 1554672 cri.go:76] found id: ""
	I0817 01:53:03.717475 1554672 logs.go:270] 0 containers: []
	W0817 01:53:03.717489 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:03.717495 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:03.717533 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:03.741717 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:03.741734 1554672 cri.go:76] found id: ""
	I0817 01:53:03.741739 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:03.741798 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.744804 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:03.744851 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:03.771775 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:03.771789 1554672 cri.go:76] found id: ""
	I0817 01:53:03.771794 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:03.771831 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:03.774470 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:03.774489 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:03.801776 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:03.801798 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:03.837058 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:03.840579 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:03.843933 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:03.843957 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:03.898510 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:03.898538 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:03.952593 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:03.952621 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:04.082990 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:04.083052 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:04.223853 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:04.223887 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:04.331965 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:04.338761 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:04.340534 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:04.342392 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:04.357212 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:04.357263 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:04.694598 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:04.694717 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:04.828761 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:04.828816 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:04.851348 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:04.852551 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:04.876644 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:04.876688 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:05.331362 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:05.343522 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:05.831960 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:05.841720 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:06.329544 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:06.334286 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:06.829369 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:06.833923 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.328774 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:07.334115 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.467368 1554672 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 01:53:07.487646 1554672 api_server.go:70] duration metric: took 1m9.278576044s to wait for apiserver process to appear ...
	I0817 01:53:07.487700 1554672 api_server.go:86] waiting for apiserver healthz status ...
	I0817 01:53:07.487733 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:07.487806 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:07.534592 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:07.534644 1554672 cri.go:76] found id: ""
	I0817 01:53:07.534661 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:07.534726 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.538672 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:07.538745 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:07.572611 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:07.572657 1554672 cri.go:76] found id: ""
	I0817 01:53:07.572674 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:07.572739 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.576722 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:07.576801 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:07.611541 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:07.611559 1554672 cri.go:76] found id: ""
	I0817 01:53:07.611564 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:07.611627 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.614311 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:07.614389 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:07.641823 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:07.641859 1554672 cri.go:76] found id: ""
	I0817 01:53:07.641864 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:07.641897 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.644712 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:07.644770 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:07.667773 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:07.667788 1554672 cri.go:76] found id: ""
	I0817 01:53:07.667793 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:07.667831 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.670409 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:07.670478 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:07.695746 1554672 cri.go:76] found id: ""
	I0817 01:53:07.695763 1554672 logs.go:270] 0 containers: []
	W0817 01:53:07.695768 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:07.695784 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:07.695828 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:07.727549 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:07.727592 1554672 cri.go:76] found id: ""
	I0817 01:53:07.727608 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:07.727672 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.731096 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:07.731168 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:07.758719 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:07.758734 1554672 cri.go:76] found id: ""
	I0817 01:53:07.758739 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:07.758787 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:07.761946 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:07.761964 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:07.830586 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:07.834021 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:07.863604 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:07.863626 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:07.887301 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:07.887356 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:07.918171 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:07.918195 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:08.012682 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:08.012712 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:08.059071 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:08.059126 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:08.163276 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:08.163302 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:08.176772 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:08.176790 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:08.330227 1554672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 01:53:08.344515 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:08.425430 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:08.425453 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:08.486450 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:08.486475 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:08.515454 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:08.515475 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:08.542038 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:08.542057 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:08.828741 1554672 kapi.go:108] duration metric: took 1m7.524156223s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0817 01:53:08.834977 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:09.335143 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:09.835186 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:10.335936 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:10.834892 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.068088 1554672 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 01:53:11.076771 1554672 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 01:53:11.077605 1554672 api_server.go:139] control plane version: v1.21.3
	I0817 01:53:11.077645 1554672 api_server.go:129] duration metric: took 3.589928004s to wait for apiserver health ...
	I0817 01:53:11.077667 1554672 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 01:53:11.077694 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 01:53:11.077770 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 01:53:11.134012 1554672 cri.go:76] found id: "eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:11.134030 1554672 cri.go:76] found id: ""
	I0817 01:53:11.134035 1554672 logs.go:270] 1 containers: [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8]
	I0817 01:53:11.134081 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.136813 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 01:53:11.136882 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 01:53:11.158746 1554672 cri.go:76] found id: "29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:11.158763 1554672 cri.go:76] found id: ""
	I0817 01:53:11.158768 1554672 logs.go:270] 1 containers: [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4]
	I0817 01:53:11.158868 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.161890 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 01:53:11.161955 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 01:53:11.185618 1554672 cri.go:76] found id: "13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:11.185638 1554672 cri.go:76] found id: ""
	I0817 01:53:11.185643 1554672 logs.go:270] 1 containers: [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012]
	I0817 01:53:11.185698 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.188273 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 01:53:11.188341 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 01:53:11.212061 1554672 cri.go:76] found id: "615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:11.212084 1554672 cri.go:76] found id: ""
	I0817 01:53:11.212104 1554672 logs.go:270] 1 containers: [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13]
	I0817 01:53:11.212154 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.214710 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 01:53:11.214777 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 01:53:11.254063 1554672 cri.go:76] found id: "0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:11.254080 1554672 cri.go:76] found id: ""
	I0817 01:53:11.254086 1554672 logs.go:270] 1 containers: [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c]
	I0817 01:53:11.254150 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.257322 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 01:53:11.257386 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 01:53:11.280677 1554672 cri.go:76] found id: ""
	I0817 01:53:11.280719 1554672 logs.go:270] 0 containers: []
	W0817 01:53:11.280735 1554672 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 01:53:11.280749 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 01:53:11.280792 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 01:53:11.302301 1554672 cri.go:76] found id: "783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:11.302344 1554672 cri.go:76] found id: ""
	I0817 01:53:11.302359 1554672 logs.go:270] 1 containers: [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892]
	I0817 01:53:11.302405 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.305069 1554672 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 01:53:11.305128 1554672 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 01:53:11.334791 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.337025 1554672 cri.go:76] found id: "52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:11.337041 1554672 cri.go:76] found id: ""
	I0817 01:53:11.337046 1554672 logs.go:270] 1 containers: [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393]
	I0817 01:53:11.337097 1554672 ssh_runner.go:149] Run: which crictl
	I0817 01:53:11.340390 1554672 logs.go:123] Gathering logs for kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] ...
	I0817 01:53:11.340407 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13"
	I0817 01:53:11.377298 1554672 logs.go:123] Gathering logs for storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] ...
	I0817 01:53:11.377344 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892"
	I0817 01:53:11.408451 1554672 logs.go:123] Gathering logs for containerd ...
	I0817 01:53:11.408473 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0817 01:53:11.514559 1554672 logs.go:123] Gathering logs for container status ...
	I0817 01:53:11.514589 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 01:53:11.567396 1554672 logs.go:123] Gathering logs for kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] ...
	I0817 01:53:11.567423 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8"
	I0817 01:53:11.625821 1554672 logs.go:123] Gathering logs for etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] ...
	I0817 01:53:11.625847 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4"
	I0817 01:53:11.652282 1554672 logs.go:123] Gathering logs for coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] ...
	I0817 01:53:11.652306 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012"
	I0817 01:53:11.675002 1554672 logs.go:123] Gathering logs for kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] ...
	I0817 01:53:11.675047 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c"
	I0817 01:53:11.697704 1554672 logs.go:123] Gathering logs for kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] ...
	I0817 01:53:11.697724 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393"
	I0817 01:53:11.745590 1554672 logs.go:123] Gathering logs for kubelet ...
	I0817 01:53:11.745611 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 01:53:11.836311 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:11.837956 1554672 logs.go:123] Gathering logs for dmesg ...
	I0817 01:53:11.837993 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 01:53:11.865409 1554672 logs.go:123] Gathering logs for describe nodes ...
	I0817 01:53:11.865430 1554672 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 01:53:12.335417 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:12.834938 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:13.335078 1554672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 01:53:13.835219 1554672 kapi.go:108] duration metric: took 1m6.512977174s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0817 01:53:13.838858 1554672 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, volumesnapshots, olm, registry, gcp-auth, ingress, csi-hostpath-driver
	I0817 01:53:13.838918 1554672 addons.go:344] enableAddons completed in 1m15.630038865s
	I0817 01:53:14.513128 1554672 system_pods.go:59] 18 kube-system pods found
	I0817 01:53:14.513163 1554672 system_pods.go:61] "coredns-558bd4d5db-sxct6" [954e41e3-b5a4-4fa8-8926-fa9f53507414] Running
	I0817 01:53:14.513169 1554672 system_pods.go:61] "csi-hostpath-attacher-0" [ae96aa4a-3965-45a9-8e94-db9fee5bcae1] Running
	I0817 01:53:14.513174 1554672 system_pods.go:61] "csi-hostpath-provisioner-0" [e6cedc35-32e7-4be9-907a-063eaa25f07d] Running
	I0817 01:53:14.513178 1554672 system_pods.go:61] "csi-hostpath-resizer-0" [9c8b4283-1ee1-45a6-ace0-e0096867592a] Running
	I0817 01:53:14.513183 1554672 system_pods.go:61] "csi-hostpath-snapshotter-0" [d23b3d96-e125-4669-a694-40c25a9ca2bc] Running
	I0817 01:53:14.513189 1554672 system_pods.go:61] "csi-hostpathplugin-0" [50373dc3-be79-4049-b2f4-e19bb0a79c10] Running
	I0817 01:53:14.513193 1554672 system_pods.go:61] "etcd-addons-20210817015042-1554185" [b0a759e2-33c6-486b-aee8-e1019669fb12] Running
	I0817 01:53:14.513200 1554672 system_pods.go:61] "kindnet-xp2kn" [234e19f8-3cdd-4c44-9dff-290f932bba79] Running
	I0817 01:53:14.513205 1554672 system_pods.go:61] "kube-apiserver-addons-20210817015042-1554185" [3704ee0c-53da-4106-b407-9c6829a74921] Running
	I0817 01:53:14.513215 1554672 system_pods.go:61] "kube-controller-manager-addons-20210817015042-1554185" [a7e9992d-6083-4141-a778-7ab31067cb40] Running
	I0817 01:53:14.513220 1554672 system_pods.go:61] "kube-proxy-88pjl" [3152779f-8eaa-4982-8a07-a39f7c215086] Running
	I0817 01:53:14.513225 1554672 system_pods.go:61] "kube-scheduler-addons-20210817015042-1554185" [3e4bab6f-a0a1-46e8-83dd-f7b11f4e9d62] Running
	I0817 01:53:14.513229 1554672 system_pods.go:61] "metrics-server-77c99ccb96-x8mh4" [634199c2-0567-4fb7-b5d6-2205aa4fcfd4] Running
	I0817 01:53:14.513238 1554672 system_pods.go:61] "registry-9np4b" [1b228b1c-c9df-4c36-a0f4-2a2fb6ec967b] Running
	I0817 01:53:14.513247 1554672 system_pods.go:61] "registry-proxy-p5xh8" [0a7638e5-9e17-4626-aeb9-b7fe2abe695d] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 01:53:14.513257 1554672 system_pods.go:61] "snapshot-controller-989f9ddc8-rcswn" [0d6290bd-2e2d-4e14-b299-f4b14ea2de3b] Running
	I0817 01:53:14.513264 1554672 system_pods.go:61] "snapshot-controller-989f9ddc8-zqgfr" [7ceff31f-ed10-44dd-8c0f-7063e87beadc] Running
	I0817 01:53:14.513274 1554672 system_pods.go:61] "storage-provisioner" [3f4cb2a6-c88b-486f-bba7-cef64ca39e9a] Running
	I0817 01:53:14.513279 1554672 system_pods.go:74] duration metric: took 3.43559739s to wait for pod list to return data ...
	I0817 01:53:14.513290 1554672 default_sa.go:34] waiting for default service account to be created ...
	I0817 01:53:14.515707 1554672 default_sa.go:45] found service account: "default"
	I0817 01:53:14.515727 1554672 default_sa.go:55] duration metric: took 2.432583ms for default service account to be created ...
	I0817 01:53:14.515734 1554672 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 01:53:14.523274 1554672 system_pods.go:86] 18 kube-system pods found
	I0817 01:53:14.523301 1554672 system_pods.go:89] "coredns-558bd4d5db-sxct6" [954e41e3-b5a4-4fa8-8926-fa9f53507414] Running
	I0817 01:53:14.523309 1554672 system_pods.go:89] "csi-hostpath-attacher-0" [ae96aa4a-3965-45a9-8e94-db9fee5bcae1] Running
	I0817 01:53:14.523314 1554672 system_pods.go:89] "csi-hostpath-provisioner-0" [e6cedc35-32e7-4be9-907a-063eaa25f07d] Running
	I0817 01:53:14.523324 1554672 system_pods.go:89] "csi-hostpath-resizer-0" [9c8b4283-1ee1-45a6-ace0-e0096867592a] Running
	I0817 01:53:14.523332 1554672 system_pods.go:89] "csi-hostpath-snapshotter-0" [d23b3d96-e125-4669-a694-40c25a9ca2bc] Running
	I0817 01:53:14.523338 1554672 system_pods.go:89] "csi-hostpathplugin-0" [50373dc3-be79-4049-b2f4-e19bb0a79c10] Running
	I0817 01:53:14.523346 1554672 system_pods.go:89] "etcd-addons-20210817015042-1554185" [b0a759e2-33c6-486b-aee8-e1019669fb12] Running
	I0817 01:53:14.523351 1554672 system_pods.go:89] "kindnet-xp2kn" [234e19f8-3cdd-4c44-9dff-290f932bba79] Running
	I0817 01:53:14.523364 1554672 system_pods.go:89] "kube-apiserver-addons-20210817015042-1554185" [3704ee0c-53da-4106-b407-9c6829a74921] Running
	I0817 01:53:14.523369 1554672 system_pods.go:89] "kube-controller-manager-addons-20210817015042-1554185" [a7e9992d-6083-4141-a778-7ab31067cb40] Running
	I0817 01:53:14.523377 1554672 system_pods.go:89] "kube-proxy-88pjl" [3152779f-8eaa-4982-8a07-a39f7c215086] Running
	I0817 01:53:14.523382 1554672 system_pods.go:89] "kube-scheduler-addons-20210817015042-1554185" [3e4bab6f-a0a1-46e8-83dd-f7b11f4e9d62] Running
	I0817 01:53:14.523391 1554672 system_pods.go:89] "metrics-server-77c99ccb96-x8mh4" [634199c2-0567-4fb7-b5d6-2205aa4fcfd4] Running
	I0817 01:53:14.523396 1554672 system_pods.go:89] "registry-9np4b" [1b228b1c-c9df-4c36-a0f4-2a2fb6ec967b] Running
	I0817 01:53:14.523405 1554672 system_pods.go:89] "registry-proxy-p5xh8" [0a7638e5-9e17-4626-aeb9-b7fe2abe695d] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 01:53:14.523414 1554672 system_pods.go:89] "snapshot-controller-989f9ddc8-rcswn" [0d6290bd-2e2d-4e14-b299-f4b14ea2de3b] Running
	I0817 01:53:14.523429 1554672 system_pods.go:89] "snapshot-controller-989f9ddc8-zqgfr" [7ceff31f-ed10-44dd-8c0f-7063e87beadc] Running
	I0817 01:53:14.523434 1554672 system_pods.go:89] "storage-provisioner" [3f4cb2a6-c88b-486f-bba7-cef64ca39e9a] Running
	I0817 01:53:14.523439 1554672 system_pods.go:126] duration metric: took 7.700756ms to wait for k8s-apps to be running ...
	I0817 01:53:14.523449 1554672 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 01:53:14.523496 1554672 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 01:53:14.532286 1554672 system_svc.go:56] duration metric: took 8.834069ms WaitForService to wait for kubelet.
	I0817 01:53:14.532341 1554672 kubeadm.go:547] duration metric: took 1m16.323273553s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 01:53:14.532368 1554672 node_conditions.go:102] verifying NodePressure condition ...
	I0817 01:53:14.535572 1554672 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 01:53:14.535600 1554672 node_conditions.go:123] node cpu capacity is 2
	I0817 01:53:14.535613 1554672 node_conditions.go:105] duration metric: took 3.24014ms to run NodePressure ...
	I0817 01:53:14.535627 1554672 start.go:231] waiting for startup goroutines ...
	I0817 01:53:14.849964 1554672 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 01:53:14.851936 1554672 out.go:177] * Done! kubectl is now configured to use "addons-20210817015042-1554185" cluster and "default" namespace by default
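	The kapi.go and pod_ready.go lines throughout the log above are readiness waits against label selectors such as "kubernetes.io/minikube-addons=registry". The following is a minimal client-go sketch of that kind of wait loop, not minikube's actual implementation; the kubeconfig path, namespace, selector, timeout, and poll interval are assumptions for illustration:

	// Sketch: poll pods matching a label selector until all report the Ready condition.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location; adjust for the environment at hand.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		selector := "kubernetes.io/minikube-addons=registry" // one selector seen in the log
		deadline := time.Now().Add(6 * time.Minute)           // assumed timeout

		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allReady := true
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						allReady = false
						break
					}
				}
				if allReady {
					fmt.Println("all pods matching", selector, "are Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		fmt.Println("timed out waiting for", selector)
	}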
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID
	6f369eecd6011       d544402579747       20 seconds ago      Exited              catalog-operator                         7                   d7bf81a0ac291
	2de95bad668aa       d544402579747       26 seconds ago      Exited              olm-operator                             7                   3745d022afa82
	fcf13ae398afa       1611cd07b61d5       6 minutes ago       Running             busybox                                  0                   f1d131a615a5f
	79b64e9292026       ab63026e5f864       10 minutes ago      Running             liveness-probe                           0                   e1b67cc269ffc
	86d072c7d6f7f       f8f69c8b53974       10 minutes ago      Running             hostpath                                 0                   e1b67cc269ffc
	95f12ea0ee9f4       1f46a863d2aa9       10 minutes ago      Running             node-driver-registrar                    0                   e1b67cc269ffc
	9803a3ca0028f       bac9ddccb0c70       10 minutes ago      Running             controller                               0                   8a5c5f5789f9b
	d76b5b43f143e       b4df90000e547       10 minutes ago      Running             csi-external-health-monitor-controller   0                   e1b67cc269ffc
	33a3dc8565cfc       69724f415cab8       10 minutes ago      Running             csi-attacher                             0                   0ae516c846f1f
	7cf8bb6cdcfe2       a883f7fc35610       10 minutes ago      Exited              patch                                    0                   8703f481d86b7
	b2165d1abb5e5       a883f7fc35610       10 minutes ago      Exited              create                                   0                   95b081ab37530
	eb2360810df1a       e3597035e9357       10 minutes ago      Running             metrics-server                           0                   42d312091cf20
	c3eb735c4bd3e       d65cad97e5f05       10 minutes ago      Running             csi-snapshotter                          0                   558825437a764
	3af33a1255a45       03c15ec36e257       11 minutes ago      Running             csi-provisioner                          0                   cb89551723f57
	2b02be61418e6       63f120615f44b       11 minutes ago      Running             csi-external-health-monitor-agent        0                   e1b67cc269ffc
	1b06e793319cd       3758cfc26c6db       11 minutes ago      Running             volume-snapshot-controller               0                   c81fe44186720
	ecf9efd7a3f01       803606888e0b1       11 minutes ago      Running             csi-resizer                              0                   f99c5fda234ab
	783c0958684bd       ba04bb24b9575       11 minutes ago      Running             storage-provisioner                      0                   0cde084873a62
	13be13e3410ac       1a1f05a2cd7c2       11 minutes ago      Running             coredns                                  0                   00cb17ddd7f4a
	6fe738b9a8dba       3758cfc26c6db       11 minutes ago      Running             volume-snapshot-controller               0                   d0b05273cbb65
	7b33a9bf5802e       f37b7c809e5dc       11 minutes ago      Running             kindnet-cni                              0                   96dbe7c3048af
	0483eb703ed0f       4ea38350a1beb       11 minutes ago      Running             kube-proxy                               0                   f0918af3dc71f
	eacccd844ca10       44a6d50ef170d       12 minutes ago      Running             kube-apiserver                           0                   a18344960e958
	615d16acf0dc7       31a3b96cefc1e       12 minutes ago      Running             kube-scheduler                           0                   99c49ff38f4e8
	29af4eb3039bc       05b738aa1bc63       12 minutes ago      Running             etcd                                     0                   c6d8e2c4d15ca
	52a4c60d098e5       cb310ff289d79       12 minutes ago      Running             kube-controller-manager                  0                   437a86afaf37b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 01:50:50 UTC, end at Tue 2021-08-17 02:03:50 UTC. --
	Aug 17 02:00:57 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:00:57.896591739Z" level=info msg="PullImage \"nginx:latest\""
	Aug 17 02:00:58 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:00:58.937393718Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:7ef3ca6ca846a10787f98fd2722d6e4054a17b37981a3ca273207a792731aebe: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:03:23 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:23.898439089Z" level=info msg="CreateContainer within sandbox \"3745d022afa82163345f74533a703120cceee0d88aaec3ecd1ec9ccea8b782e8\" for container &ContainerMetadata{Name:olm-operator,Attempt:7,}"
	Aug 17 02:03:23 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:23.924948510Z" level=info msg="CreateContainer within sandbox \"3745d022afa82163345f74533a703120cceee0d88aaec3ecd1ec9ccea8b782e8\" for &ContainerMetadata{Name:olm-operator,Attempt:7,} returns container id \"2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2\""
	Aug 17 02:03:23 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:23.925347054Z" level=info msg="StartContainer for \"2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2\""
	Aug 17 02:03:23 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:23.983952352Z" level=info msg="Finish piping stderr of container \"2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2\""
	Aug 17 02:03:23 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:23.984272266Z" level=info msg="Finish piping stdout of container \"2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2\""
	Aug 17 02:03:23 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:23.988201335Z" level=info msg="StartContainer for \"2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2\" returns successfully"
	Aug 17 02:03:23 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:23.988301084Z" level=info msg="TaskExit event &TaskExit{ContainerID:2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2,ID:2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2,Pid:13270,ExitStatus:1,ExitedAt:2021-08-17 02:03:23.985448648 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 02:03:24 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:24.011278955Z" level=info msg="shim disconnected" id=2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2
	Aug 17 02:03:24 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:24.011329965Z" level=error msg="copy shim log" error="read /proc/self/fd/182: file already closed"
	Aug 17 02:03:24 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:24.773928447Z" level=info msg="RemoveContainer for \"d916b370a712aec71b118ac203deb61c6428f5c97ed959668b11557751746aff\""
	Aug 17 02:03:24 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:24.787390163Z" level=info msg="RemoveContainer for \"d916b370a712aec71b118ac203deb61c6428f5c97ed959668b11557751746aff\" returns successfully"
	Aug 17 02:03:29 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:29.897790810Z" level=info msg="CreateContainer within sandbox \"d7bf81a0ac291012cc28e0a512ee49a912ec3b91bee9ccc531796ff6b9e8eaab\" for container &ContainerMetadata{Name:catalog-operator,Attempt:7,}"
	Aug 17 02:03:29 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:29.917621365Z" level=info msg="CreateContainer within sandbox \"d7bf81a0ac291012cc28e0a512ee49a912ec3b91bee9ccc531796ff6b9e8eaab\" for &ContainerMetadata{Name:catalog-operator,Attempt:7,} returns container id \"6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504\""
	Aug 17 02:03:29 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:29.918004516Z" level=info msg="StartContainer for \"6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504\""
	Aug 17 02:03:29 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:29.966310397Z" level=info msg="Finish piping stderr of container \"6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504\""
	Aug 17 02:03:29 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:29.966389477Z" level=info msg="Finish piping stdout of container \"6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504\""
	Aug 17 02:03:29 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:29.971046144Z" level=info msg="StartContainer for \"6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504\" returns successfully"
	Aug 17 02:03:29 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:29.971148954Z" level=info msg="TaskExit event &TaskExit{ContainerID:6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504,ID:6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504,Pid:13399,ExitStatus:1,ExitedAt:2021-08-17 02:03:29.96792122 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 02:03:29 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:29.996086641Z" level=info msg="shim disconnected" id=6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504
	Aug 17 02:03:29 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:29.996141246Z" level=error msg="copy shim log" error="read /proc/self/fd/182: file already closed"
	Aug 17 02:03:30 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:30.786230850Z" level=info msg="RemoveContainer for \"205c91f27e1fd2a8bd53b53233d919db3a591db8d235093c6d7902b0b9e6485f\""
	Aug 17 02:03:30 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:30.792251179Z" level=info msg="RemoveContainer for \"205c91f27e1fd2a8bd53b53233d919db3a591db8d235093c6d7902b0b9e6485f\" returns successfully"
	Aug 17 02:03:49 addons-20210817015042-1554185 containerd[449]: time="2021-08-17T02:03:49.897218828Z" level=info msg="PullImage \"nginx:latest\""
	
	* 
	* ==> coredns [13be13e3410ac9d65c4f578cc0e7c7e98edfa1d48a0754ca69fb060868905012] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210817015042-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210817015042-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=addons-20210817015042-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T01_51_45_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210817015042-1554185
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210817015042-1554185"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 01:51:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210817015042-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 02:03:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 02:02:57 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 02:02:57 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 02:02:57 +0000   Tue, 17 Aug 2021 01:51:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 02:02:57 +0000   Tue, 17 Aug 2021 01:52:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210817015042-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                ace180e0-70a7-4178-bffd-233be0529698
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  default                     task-pv-pod                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-controller-59b45fb494-d8wsj                100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-558bd4d5db-sxct6                                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 csi-hostpath-attacher-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-provisioner-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-snapshotter-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-0                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-addons-20210817015042-1554185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-xp2kn                                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-20210817015042-1554185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-20210817015042-1554185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-88pjl                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-20210817015042-1554185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-77c99ccb96-x8mh4                          100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         11m
	  kube-system                 snapshot-controller-989f9ddc8-rcswn                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-989f9ddc8-zqgfr                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  olm                         catalog-operator-75d496484d-86xl7                        10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         11m
	  olm                         olm-operator-859c88c96-j28dd                             10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1070m (53%)  100m (5%)
	  memory             850Mi (10%)  220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x4 over 12m)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x3 over 12m)  kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet     Node addons-20210817015042-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet     Node addons-20210817015042-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                11m                kubelet     Node addons-20210817015042-1554185 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug17 01:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [29af4eb3039bc5ca86d1529da7eee6cbc04591e49ae0cbd347acf80bb93a92c4] <==
	* 2021-08-17 02:00:02.861483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:00:12.861367 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:00:22.861142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:00:32.861210 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:00:42.862049 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:00:52.861822 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:01:02.862078 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:01:12.861501 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:01:22.861071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:01:32.860960 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:01:36.802974 I | mvcc: store.index: compact 1588
	2021-08-17 02:01:36.826662 I | mvcc: finished scheduled compaction at 1588 (took 23.144433ms)
	2021-08-17 02:01:42.862033 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:01:52.861316 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:02.861392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:12.861284 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:22.861252 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:32.861856 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:42.862067 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:02:52.861377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:02.861122 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:12.861370 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:22.861110 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:32.861132 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:03:42.861971 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:03:50 up  9:46,  0 users,  load average: 0.47, 0.55, 1.17
	Linux addons-20210817015042-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [eacccd844ca104cf2a46da19e8301274c9aa6042de8f348cb72b8873c466fdc8] <==
	* I0817 01:58:45.412987       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 01:59:16.329702       1 client.go:360] parsed scheme: "passthrough"
	I0817 01:59:16.329841       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 01:59:16.329861       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 01:59:53.260326       1 client.go:360] parsed scheme: "passthrough"
	I0817 01:59:53.260472       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 01:59:53.260490       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:00:36.142867       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:00:36.142995       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:00:36.143009       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:01:17.263282       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:01:17.263422       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:01:17.263440       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:01:59.911534       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:01:59.911574       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:01:59.911582       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:02:32.231871       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:02:32.231909       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:02:32.231918       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:03:04.787394       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:03:04.787437       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:03:04.787446       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:03:39.645013       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:03:39.645160       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:03:39.645180       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [52a4c60d098e5447e91377ab7a25c2ee6063461171c47e2d54dd9592a167f393] <==
	* I0817 01:52:27.209952       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
	I0817 01:52:27.209987       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
	I0817 01:52:27.210060       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
	I0817 01:52:27.211453       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0817 01:52:27.412457       1 shared_informer.go:247] Caches are synced for resource quota 
	W0817 01:52:27.565215       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 01:52:27.570191       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0817 01:52:27.585755       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0817 01:52:27.587067       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0817 01:52:27.788117       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 01:52:33.056834       1 event.go:291] "Event occurred" object="kube-system/registry-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-proxy-p5xh8"
	E0817 01:52:33.075112       1 daemon_controller.go:320] kube-system/registry-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"registry-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"bbc76700-77ff-4df0-928a-e381ef3cf185", ResourceVersion:"486", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764761920, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "kubernetes.io/minikube-addons":"registry"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"kubernetes.io/minikube-addons\":\"registry\"},\"name\":\"regist
ry-proxy\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"kubernetes.io/minikube-addons\":\"registry\",\"registry-proxy\":\"true\"}},\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"kubernetes.io/minikube-addons\":\"registry\",\"registry-proxy\":\"true\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"REGISTRY_HOST\",\"value\":\"registry.kube-system.svc.cluster.local\"},{\"name\":\"REGISTRY_PORT\",\"value\":\"80\"}],\"image\":\"gcr.io/google_containers/kube-registry-proxy:0.4@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"registry-proxy\",\"ports\":[{\"containerPort\":80,\"hostPort\":5000,\"name\":\"registry\"}]}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d7e
000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d7e018)}, v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d7e030), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d7e048)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b3d3e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "kubernetes.io/minikube-addons":"registry", "registry-proxy":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(n
il), Containers:[]v1.Container{v1.Container{Name:"registry-proxy", Image:"gcr.io/google_containers/kube-registry-proxy:0.4@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"registry", HostPort:5000, ContainerPort:80, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"REGISTRY_HOST", Value:"registry.kube-system.svc.cluster.local", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"REGISTRY_PORT", Value:"80", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPre
sent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001d2d158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004f7d50), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:
v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001d63790)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001d2d16c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "registry-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0817 01:52:36.883695       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0817 01:52:56.050693       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0817 01:52:56.851746       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0817 01:52:57.251384       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	E0817 01:52:57.435870       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0817 01:52:57.652302       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	W0817 01:52:57.808910       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 01:57:26.704983       1 tokens_controller.go:262] error synchronizing serviceaccount gcp-auth/default: secrets "default-token-zg7wn" is forbidden: unable to create new content in namespace gcp-auth because it is being terminated
	I0817 01:57:48.400672       1 event.go:291] "Event occurred" object="default/hpvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0817 01:57:48.797797       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-edf3d92e-1108-4adc-a8cd-37519395465d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^85c325d7-fefe-11eb-bd30-26acc1e90309") from node "addons-20210817015042-1554185" 
	I0817 01:57:49.345399       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-edf3d92e-1108-4adc-a8cd-37519395465d" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^85c325d7-fefe-11eb-bd30-26acc1e90309") from node "addons-20210817015042-1554185" 
	I0817 01:57:49.345604       1 event.go:291] "Event occurred" object="default/task-pv-pod" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-edf3d92e-1108-4adc-a8cd-37519395465d\" "
	I0817 01:57:53.078416       1 namespace_controller.go:185] Namespace has been deleted gcp-auth
	
	* 
	* ==> kube-proxy [0483eb703ed0ff8cf28f926eb4d6fba94515fe38bf8afee8d969ad96a0b9092c] <==
	* I0817 01:51:59.199305       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 01:51:59.199348       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 01:51:59.199381       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 01:51:59.228513       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 01:51:59.228548       1 server_others.go:212] Using iptables Proxier.
	I0817 01:51:59.228558       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 01:51:59.228568       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 01:51:59.229489       1 server.go:643] Version: v1.21.3
	I0817 01:51:59.234867       1 config.go:315] Starting service config controller
	I0817 01:51:59.234890       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 01:51:59.236683       1 config.go:224] Starting endpoint slice config controller
	I0817 01:51:59.236698       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 01:51:59.242351       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 01:51:59.243149       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 01:51:59.338912       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 01:51:59.338971       1 shared_informer.go:247] Caches are synced for service config 
	W0817 01:58:09.244582       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [615d16acf0dc7d0f4da0f95c83dbdcd3e7aa6c26229e78cc73c70576847fea13] <==
	* W0817 01:51:41.468231       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 01:51:41.468338       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 01:51:41.468430       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 01:51:41.611648       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 01:51:41.615019       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 01:51:41.616612       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 01:51:41.616756       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 01:51:41.622145       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 01:51:41.624737       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 01:51:41.627800       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 01:51:41.628373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 01:51:41.628434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 01:51:41.628492       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 01:51:41.628547       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 01:51:41.628600       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628650       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:41.628754       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 01:51:41.628805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 01:51:41.630964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 01:51:41.631026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 01:51:42.555258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 01:51:42.563233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 01:51:42.595603       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 01:51:44.616129       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 01:50:50 UTC, end at Tue 2021-08-17 02:03:51 UTC. --
	Aug 17 02:03:08 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:08.897543    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/task-pv-pod" podUID=c51fcafb-3087-4d61-8189-5d6ec7ef33ac
	Aug 17 02:03:18 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:18.895792    1147 scope.go:111] "RemoveContainer" containerID="205c91f27e1fd2a8bd53b53233d919db3a591db8d235093c6d7902b0b9e6485f"
	Aug 17 02:03:18 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:18.896559    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:03:23 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:23.895611    1147 scope.go:111] "RemoveContainer" containerID="d916b370a712aec71b118ac203deb61c6428f5c97ed959668b11557751746aff"
	Aug 17 02:03:23 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:23.897217    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/task-pv-pod" podUID=c51fcafb-3087-4d61-8189-5d6ec7ef33ac
	Aug 17 02:03:24 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:24.772043    1147 scope.go:111] "RemoveContainer" containerID="d916b370a712aec71b118ac203deb61c6428f5c97ed959668b11557751746aff"
	Aug 17 02:03:24 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:24.772343    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:03:24 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:24.772696    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 02:03:25 addons-20210817015042-1554185 kubelet[1147]: W0817 02:03:25.464369    1147 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/burstable/podbc13a715-3c7d-486c-846e-64675afe63d0/2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2 WatchSource:0}: task 2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2 not found: not found
	Aug 17 02:03:27 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:27.402672    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:03:27 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:27.403088    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 02:03:27 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:27.777961    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:03:27 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:27.778375    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	Aug 17 02:03:29 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:29.895979    1147 scope.go:111] "RemoveContainer" containerID="205c91f27e1fd2a8bd53b53233d919db3a591db8d235093c6d7902b0b9e6485f"
	Aug 17 02:03:30 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:30.784588    1147 scope.go:111] "RemoveContainer" containerID="205c91f27e1fd2a8bd53b53233d919db3a591db8d235093c6d7902b0b9e6485f"
	Aug 17 02:03:30 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:30.784888    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:03:30 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:30.785250    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:03:31 addons-20210817015042-1554185 kubelet[1147]: W0817 02:03:31.446449    1147 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/burstable/podab50ee4e-7255-4a75-b1d3-6cf397a713a6/6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504 WatchSource:0}: task 6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504 not found: not found
	Aug 17 02:03:37 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:37.401601    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:03:37 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:37.402393    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:03:37 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:37.799693    1147 scope.go:111] "RemoveContainer" containerID="6f369eecd60119613aab89f8f79cf1f78cde126953416ce8d8660b8130a1c504"
	Aug 17 02:03:37 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:37.800100    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-86xl7_olm(ab50ee4e-7255-4a75-b1d3-6cf397a713a6)\"" pod="olm/catalog-operator-75d496484d-86xl7" podUID=ab50ee4e-7255-4a75-b1d3-6cf397a713a6
	Aug 17 02:03:37 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:37.896158    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/task-pv-pod" podUID=c51fcafb-3087-4d61-8189-5d6ec7ef33ac
	Aug 17 02:03:40 addons-20210817015042-1554185 kubelet[1147]: I0817 02:03:40.896035    1147 scope.go:111] "RemoveContainer" containerID="2de95bad668aa3010125750c5c0f7833d97908b2d81504fdf1c2fc6e3e8da2e2"
	Aug 17 02:03:40 addons-20210817015042-1554185 kubelet[1147]: E0817 02:03:40.896410    1147 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-j28dd_olm(bc13a715-3c7d-486c-846e-64675afe63d0)\"" pod="olm/olm-operator-859c88c96-j28dd" podUID=bc13a715-3c7d-486c-846e-64675afe63d0
	
	* 
	* ==> storage-provisioner [783c0958684bd9e8a9a848e0a5d52f1694818047aa6cc914f985737d5883e892] <==
	* I0817 01:52:45.168349       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 01:52:45.223745       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 01:52:45.226921       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 01:52:45.243264       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 01:52:45.243748       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41860dbd-59f4-40f3-b06c-d38f89989bf1", APIVersion:"v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01 became leader
	I0817 01:52:45.243789       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01!
	I0817 01:52:45.346906       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210817015042-1554185_09f87493-b972-47f4-9131-044097fd5d01!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210817015042-1554185 -n addons-20210817015042-1554185
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210817015042-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: task-pv-pod ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210817015042-1554185 describe pod task-pv-pod ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210817015042-1554185 describe pod task-pv-pod ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j: exit status 1 (142.995806ms)

                                                
                                                
-- stdout --
	Name:         task-pv-pod
	Namespace:    default
	Priority:     0
	Node:         addons-20210817015042-1554185/192.168.49.2
	Start Time:   Tue, 17 Aug 2021 01:57:48 +0000
	Labels:       app=task-pv-pod
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc6x7 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-jc6x7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                   Age                    From                                      Message
	  ----     ------                   ----                   ----                                      -------
	  Normal   Scheduled                6m4s                   default-scheduler                         Successfully assigned default/task-pv-pod to addons-20210817015042-1554185
	  Warning  VolumeConditionAbnormal  6m4s (x10 over 6m4s)   csi-pv-monitor-agent-hostpath.csi.k8s.io  The volume isn't mounted
	  Normal   SuccessfulAttachVolume   6m3s                   attachdetach-controller                   AttachVolume.Attach succeeded for volume "pvc-edf3d92e-1108-4adc-a8cd-37519395465d"
	  Warning  Failed                   5m18s                  kubelet                                   Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:7ef3ca6ca846a10787f98fd2722d6e4054a17b37981a3ca273207a792731aebe: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling                  4m28s (x4 over 5m56s)  kubelet                                   Pulling image "nginx"
	  Warning  Failed                   4m27s (x3 over 5m55s)  kubelet                                   Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed                   4m27s (x4 over 5m55s)  kubelet                                   Error: ErrImagePull
	  Warning  Failed                   4m13s (x6 over 5m54s)  kubelet                                   Error: ImagePullBackOff
	  Normal   VolumeConditionNormal    64s (x41 over 5m4s)    csi-pv-monitor-agent-hostpath.csi.k8s.io  The Volume returns to the healthy state
	  Normal   BackOff                  56s (x20 over 5m54s)   kubelet                                   Back-off pulling image "nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-msw6w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xpb6j" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210817015042-1554185 describe pod task-pv-pod ingress-nginx-admission-create-msw6w ingress-nginx-admission-patch-xpb6j: exit status 1
--- FAIL: TestAddons/parallel/CSI (363.91s)
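Note on the failure mode above: both pulls fail with Docker Hub's anonymous rate limit (HTTP 429 on docker.io/library/nginx), not with the CSI driver itself; the volume attaches and later reports a healthy condition, but the pod never leaves ImagePullBackOff. A minimal sketch of one way a run like this could avoid the anonymous pull, assuming the profile name used above and reusing the same `image load` path the Audit log shows the suite already exercising (the helper name below is hypothetical, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// preloadImage is a hypothetical helper: it pulls an image once on the host
	// (where the host's Docker credentials and limits apply) and loads it into
	// the minikube node, so the kubelet never pulls anonymously from registry-1.docker.io.
	func preloadImage(profile, image string) error {
		if out, err := exec.Command("docker", "pull", image).CombinedOutput(); err != nil {
			return fmt.Errorf("docker pull %s: %v\n%s", image, err, out)
		}
		if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"image", "load", image).CombinedOutput(); err != nil {
			return fmt.Errorf("minikube image load %s: %v\n%s", image, err, out)
		}
		return nil
	}

	func main() {
		if err := preloadImage("addons-20210817015042-1554185", "nginx"); err != nil {
			fmt.Println(err)
		}
	}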

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (188.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [f757e77c-ddf8-4d74-8754-424b9f0da712] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006757918s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210817021007-1554185 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210817021007-1554185 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210817021007-1554185 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210817021007-1554185 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [42fb579e-9919-4aa5-a93f-5e2576652c96] Pending
helpers_test.go:343: "sp-pod" [42fb579e-9919-4aa5-a93f-5e2576652c96] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0817 02:13:55.845563 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:14:36.806133 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:15:58.727249 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: timed out waiting for the condition ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210817021007-1554185 -n functional-20210817021007-1554185
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2021-08-17 02:16:50.200983052 +0000 UTC m=+1640.995730844
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-20210817021007-1554185 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-20210817021007-1554185 describe po sp-pod -n default:
Name:         sp-pod
Namespace:    default
Priority:     0
Node:         functional-20210817021007-1554185/192.168.49.2
Start Time:   Tue, 17 Aug 2021 02:13:49 +0000
Labels:       test=storage-provisioner
Annotations:  <none>
Status:       Pending
IP:           10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcwlv (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-mcwlv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m                    default-scheduler  Successfully assigned default/sp-pod to functional-20210817021007-1554185
  Normal   Pulling    103s (x4 over 3m)     kubelet            Pulling image "nginx"
  Warning  Failed     102s (x4 over 2m59s)  kubelet            Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     102s (x4 over 2m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     75s (x6 over 2m58s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    61s (x7 over 2m58s)   kubelet            Back-off pulling image "nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-20210817021007-1554185 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-20210817021007-1554185 logs sp-pod -n default: exit status 1 (99.00824ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-20210817021007-1554185 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: timed out waiting for the condition
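The wait at functional_test_pvc_test.go:130 is a label-selector poll that gives up after 3m0s. A minimal client-go sketch of that kind of wait, assuming kubeconfig access and the same `test=storage-provisioner` selector (an illustration, not the actual helpers_test.go implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsRunning polls until every pod matching the selector is Running,
	// or the timeout expires (mirroring the 3m0s wait that timed out above).
	func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors or before the pod exists
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPodsRunning(cs, "default", "test=storage-provisioner", 3*time.Minute); err != nil {
			fmt.Println("pod did not become Running:", err)
		}
	}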
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20210817021007-1554185
helpers_test.go:236: (dbg) docker inspect functional-20210817021007-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f",
	        "Created": "2021-08-17T02:10:08.384823248Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1577442,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:10:08.832682993Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f/hosts",
	        "LogPath": "/var/lib/docker/containers/3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f/3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f-json.log",
	        "Name": "/functional-20210817021007-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20210817021007-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20210817021007-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/db26eeba6313c2f4c4b298a3f1f489af326b947a7b63e73db5690af276f7b2ed-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db26eeba6313c2f4c4b298a3f1f489af326b947a7b63e73db5690af276f7b2ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db26eeba6313c2f4c4b298a3f1f489af326b947a7b63e73db5690af276f7b2ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db26eeba6313c2f4c4b298a3f1f489af326b947a7b63e73db5690af276f7b2ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20210817021007-1554185",
	                "Source": "/var/lib/docker/volumes/functional-20210817021007-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20210817021007-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20210817021007-1554185",
	                "name.minikube.sigs.k8s.io": "functional-20210817021007-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "80a595a224f313e0fb37fad778fd6386bd0bd60d5a060ad8f96d4d828e4a03f0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50324"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50323"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50320"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50322"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50321"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/80a595a224f3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20210817021007-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3a09a5edabdd",
	                        "functional-20210817021007-1554185"
	                    ],
	                    "NetworkID": "166d57bccf5218156918b5f2c2dbeef588244fee8dde040bbdcb35c4f9031abc",
	                    "EndpointID": "cb8431a21bd502ec342dd4f59e1df650f0f626d000d09c2ae9b2e96fadf8b9fa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
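The post-mortem above captures the full `docker inspect` JSON; elsewhere in this log the suite's cli_runner extracts single fields with Go templates (for example `--format={{.State.Status}}` and `-f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}"`). A small sketch of that pattern, assuming the container name shown above; the helper name is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectField runs `docker container inspect -f <tmpl> <name>` and returns
	// the trimmed output, the same Go-template extraction used by the cli_runner lines in this log.
	func inspectField(name, tmpl string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %v\n%s", name, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		name := "functional-20210817021007-1554185"
		status, _ := inspectField(name, "{{.State.Status}}")
		ip, _ := inspectField(name, "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}")
		fmt.Printf("status=%s ip=%s\n", status, ip)
	}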
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-20210817021007-1554185 -n functional-20210817021007-1554185
helpers_test.go:245: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 logs -n 25: (1.067578633s)
helpers_test.go:253: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                Args                                |              Profile              |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:25 UTC | Tue, 17 Aug 2021 02:13:25 UTC |
	|         | image ls                                                           |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185 image load                       | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:26 UTC | Tue, 17 Aug 2021 02:13:26 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/busybox.tar  |                                   |         |         |                               |                               |
	| ssh     | -p                                                                 | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:26 UTC | Tue, 17 Aug 2021 02:13:27 UTC |
	|         | functional-20210817021007-1554185                                  |                                   |         |         |                               |                               |
	|         | -- sudo crictl images                                              |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185 image build -t                   | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:25 UTC | Tue, 17 Aug 2021 02:13:28 UTC |
	|         | localhost/my-image:functional-20210817021007-1554185               |                                   |         |         |                               |                               |
	|         | testdata/build                                                     |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185 image load                       | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:27 UTC | Tue, 17 Aug 2021 02:13:28 UTC |
	|         | docker.io/library/busybox:remove-functional-20210817021007-1554185 |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185 image rm                         | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:28 UTC | Tue, 17 Aug 2021 02:13:28 UTC |
	|         | docker.io/library/busybox:remove-functional-20210817021007-1554185 |                                   |         |         |                               |                               |
	| ssh     | -p                                                                 | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:28 UTC | Tue, 17 Aug 2021 02:13:28 UTC |
	|         | functional-20210817021007-1554185                                  |                                   |         |         |                               |                               |
	|         | -- sudo crictl images                                              |                                   |         |         |                               |                               |
	| ssh     | -p                                                                 | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:28 UTC | Tue, 17 Aug 2021 02:13:29 UTC |
	|         | functional-20210817021007-1554185                                  |                                   |         |         |                               |                               |
	|         | -- sudo crictl images                                              |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185 image load                       | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:29 UTC | Tue, 17 Aug 2021 02:13:30 UTC |
	|         | docker.io/library/busybox:load-functional-20210817021007-1554185   |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:29 UTC | Tue, 17 Aug 2021 02:13:30 UTC |
	|         | logs -n 25                                                         |                                   |         |         |                               |                               |
	| ssh     | -p functional-20210817021007-1554185 -- sudo crictl inspecti       | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:30 UTC | Tue, 17 Aug 2021 02:13:30 UTC |
	|         | docker.io/library/busybox:load-functional-20210817021007-1554185   |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:31 UTC | Tue, 17 Aug 2021 02:13:31 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /etc/ssl/certs/1554185.pem                                         |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:31 UTC | Tue, 17 Aug 2021 02:13:31 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /usr/share/ca-certificates/1554185.pem                             |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:31 UTC | Tue, 17 Aug 2021 02:13:32 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /etc/ssl/certs/51391683.0                                          |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:32 UTC | Tue, 17 Aug 2021 02:13:32 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /etc/ssl/certs/15541852.pem                                        |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:32 UTC | Tue, 17 Aug 2021 02:13:32 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /usr/share/ca-certificates/15541852.pem                            |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:32 UTC | Tue, 17 Aug 2021 02:13:32 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /etc/ssl/certs/3ec20f2e.0                                          |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:32 UTC | Tue, 17 Aug 2021 02:13:33 UTC |
	|         | cp testdata/cp-test.txt                                            |                                   |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                           |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:33 UTC | Tue, 17 Aug 2021 02:13:33 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                           |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:33 UTC | Tue, 17 Aug 2021 02:13:33 UTC |
	|         | ssh echo hello                                                     |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:33 UTC | Tue, 17 Aug 2021 02:13:34 UTC |
	|         | ssh cat /etc/hostname                                              |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:42 UTC | Tue, 17 Aug 2021 02:13:43 UTC |
	|         | service list                                                       |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:43 UTC | Tue, 17 Aug 2021 02:13:43 UTC |
	|         | service --namespace=default                                        |                                   |         |         |                               |                               |
	|         | --https --url hello-node                                           |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:43 UTC | Tue, 17 Aug 2021 02:13:43 UTC |
	|         | service hello-node --url                                           |                                   |         |         |                               |                               |
	|         | --format={{.IP}}                                                   |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                  | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:43 UTC | Tue, 17 Aug 2021 02:13:44 UTC |
	|         | service hello-node --url                                           |                                   |         |         |                               |                               |
	|---------|--------------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 02:12:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 02:12:35.909280 1581051 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:12:35.909413 1581051 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:12:35.909417 1581051 out.go:311] Setting ErrFile to fd 2...
	I0817 02:12:35.909419 1581051 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:12:35.909548 1581051 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:12:35.909778 1581051 out.go:305] Setting JSON to false
	I0817 02:12:35.910670 1581051 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":35694,"bootTime":1629130662,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:12:35.910739 1581051 start.go:121] virtualization:  
	I0817 02:12:35.913122 1581051 out.go:177] * [functional-20210817021007-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:12:35.915776 1581051 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:12:35.914109 1581051 notify.go:169] Checking for updates...
	I0817 02:12:35.917423 1581051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:12:35.919141 1581051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:12:35.920764 1581051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:12:35.921191 1581051 config.go:177] Loaded profile config "functional-20210817021007-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:12:35.921222 1581051 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:12:35.962249 1581051 docker.go:132] docker version: linux-20.10.8
	I0817 02:12:35.962323 1581051 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:12:36.058480 1581051 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:12:36.002195045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:12:36.058571 1581051 docker.go:244] overlay module found
	I0817 02:12:36.060511 1581051 out.go:177] * Using the docker driver based on existing profile
	I0817 02:12:36.060531 1581051 start.go:278] selected driver: docker
	I0817 02:12:36.060536 1581051 start.go:751] validating driver "docker" against &{Name:functional-20210817021007-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210817021007-1554185 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:tr
ue storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:12:36.060649 1581051 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 02:12:36.060760 1581051 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:12:36.150669 1581051 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:12:36.097591982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:12:36.151096 1581051 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 02:12:36.151113 1581051 cni.go:93] Creating CNI manager for ""
	I0817 02:12:36.151118 1581051 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:12:36.151125 1581051 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 02:12:36.151130 1581051 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 02:12:36.151134 1581051 start_flags.go:277] config:
	{Name:functional-20210817021007-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210817021007-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:tr
ue storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:12:36.153581 1581051 out.go:177] * Starting control plane node functional-20210817021007-1554185 in cluster functional-20210817021007-1554185
	I0817 02:12:36.153616 1581051 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 02:12:36.155802 1581051 out.go:177] * Pulling base image ...
	I0817 02:12:36.155823 1581051 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:12:36.155854 1581051 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 02:12:36.155860 1581051 cache.go:56] Caching tarball of preloaded images
	I0817 02:12:36.155996 1581051 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 02:12:36.156025 1581051 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 02:12:36.156154 1581051 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/config.json ...
	I0817 02:12:36.156363 1581051 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 02:12:36.200726 1581051 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 02:12:36.200738 1581051 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 02:12:36.200754 1581051 cache.go:205] Successfully downloaded all kic artifacts
	I0817 02:12:36.200792 1581051 start.go:313] acquiring machines lock for functional-20210817021007-1554185: {Name:mkbbae4b071b337a918efc1882b1450ea6e84bdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 02:12:36.200887 1581051 start.go:317] acquired machines lock for "functional-20210817021007-1554185" in 74.182µs
	I0817 02:12:36.200905 1581051 start.go:93] Skipping create...Using existing machine configuration
	I0817 02:12:36.200909 1581051 fix.go:55] fixHost starting: 
	I0817 02:12:36.201185 1581051 cli_runner.go:115] Run: docker container inspect functional-20210817021007-1554185 --format={{.State.Status}}
	I0817 02:12:36.231823 1581051 fix.go:108] recreateIfNeeded on functional-20210817021007-1554185: state=Running err=<nil>
	W0817 02:12:36.231837 1581051 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 02:12:36.233868 1581051 out.go:177] * Updating the running docker "functional-20210817021007-1554185" container ...
	I0817 02:12:36.233891 1581051 machine.go:88] provisioning docker machine ...
	I0817 02:12:36.233904 1581051 ubuntu.go:169] provisioning hostname "functional-20210817021007-1554185"
	I0817 02:12:36.233961 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:36.264819 1581051 main.go:130] libmachine: Using SSH client type: native
	I0817 02:12:36.264986 1581051 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50324 <nil> <nil>}
	I0817 02:12:36.264998 1581051 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20210817021007-1554185 && echo "functional-20210817021007-1554185" | sudo tee /etc/hostname
	I0817 02:12:36.389744 1581051 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20210817021007-1554185
	
	I0817 02:12:36.389820 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:36.422956 1581051 main.go:130] libmachine: Using SSH client type: native
	I0817 02:12:36.423123 1581051 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50324 <nil> <nil>}
	I0817 02:12:36.423145 1581051 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20210817021007-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20210817021007-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20210817021007-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 02:12:36.538050 1581051 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 02:12:36.538065 1581051 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 02:12:36.538083 1581051 ubuntu.go:177] setting up certificates
	I0817 02:12:36.538091 1581051 provision.go:83] configureAuth start
	I0817 02:12:36.538145 1581051 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210817021007-1554185
	I0817 02:12:36.569390 1581051 provision.go:138] copyHostCerts
	I0817 02:12:36.569436 1581051 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 02:12:36.569442 1581051 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 02:12:36.569490 1581051 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 02:12:36.569565 1581051 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 02:12:36.569571 1581051 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 02:12:36.569588 1581051 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 02:12:36.569632 1581051 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 02:12:36.569636 1581051 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 02:12:36.569650 1581051 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 02:12:36.569690 1581051 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.functional-20210817021007-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20210817021007-1554185]
	I0817 02:12:37.156727 1581051 provision.go:172] copyRemoteCerts
	I0817 02:12:37.156780 1581051 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 02:12:37.156818 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:37.188687 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:12:37.273344 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 02:12:37.288909 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0817 02:12:37.304156 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 02:12:37.319497 1581051 provision.go:86] duration metric: configureAuth took 781.387536ms
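configureAuth above generates a server certificate with the SANs listed in the provision.go:112 line and copies it to /etc/docker on the node. To eyeball the SANs that actually landed there (sketch; openssl is available in the node, as the later openssl Run: lines in this log show):

	out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'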
	I0817 02:12:37.319510 1581051 ubuntu.go:193] setting minikube options for container-runtime
	I0817 02:12:37.319709 1581051 config.go:177] Loaded profile config "functional-20210817021007-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:12:37.319717 1581051 machine.go:91] provisioned docker machine in 1.085820111s
	I0817 02:12:37.319722 1581051 start.go:267] post-start starting for "functional-20210817021007-1554185" (driver="docker")
	I0817 02:12:37.319727 1581051 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 02:12:37.319780 1581051 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 02:12:37.319814 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:37.354886 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:12:37.441193 1581051 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 02:12:37.443710 1581051 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 02:12:37.443724 1581051 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 02:12:37.443734 1581051 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 02:12:37.443740 1581051 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 02:12:37.443747 1581051 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 02:12:37.443792 1581051 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 02:12:37.443865 1581051 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 02:12:37.443947 1581051 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/test/nested/copy/1554185/hosts -> hosts in /etc/test/nested/copy/1554185
	I0817 02:12:37.443979 1581051 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1554185
	I0817 02:12:37.449868 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:12:37.464996 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/test/nested/copy/1554185/hosts --> /etc/test/nested/copy/1554185/hosts (40 bytes)
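The filesync step above mirrors anything placed under the minikube home's files/ tree into the node at the same path, which is how the test's 15541852.pem and the nested hosts file end up in /etc/ssl/certs and /etc/test/nested/copy/1554185. A minimal sketch using the default ~/.minikube location (this run uses the jenkins workspace path logged above; the example path is hypothetical):

	mkdir -p ~/.minikube/files/etc/example
	echo "synced by minikube" > ~/.minikube/files/etc/example/hello   # copied to /etc/example/hello in the node on the next start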
	I0817 02:12:37.479849 1581051 start.go:270] post-start completed in 160.118039ms
	I0817 02:12:37.479892 1581051 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:12:37.479926 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:37.511901 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:12:37.594992 1581051 fix.go:57] fixHost completed within 1.394076934s
	I0817 02:12:37.595007 1581051 start.go:80] releasing machines lock for "functional-20210817021007-1554185", held for 1.394112528s
	I0817 02:12:37.595096 1581051 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210817021007-1554185
	I0817 02:12:37.632593 1581051 ssh_runner.go:149] Run: systemctl --version
	I0817 02:12:37.632634 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:37.632864 1581051 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 02:12:37.632912 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:37.676811 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:12:37.690496 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:12:37.892619 1581051 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 02:12:37.902530 1581051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 02:12:37.910908 1581051 docker.go:153] disabling docker service ...
	I0817 02:12:37.910944 1581051 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 02:12:37.919628 1581051 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 02:12:37.929553 1581051 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 02:12:38.028997 1581051 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 02:12:38.129235 1581051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
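Because this profile uses containerd, docker.socket and docker.service are stopped and masked above. A hedged way to confirm the resulting state by hand:

	out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- sudo systemctl is-enabled docker.service   # prints "masked"
	out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- sudo systemctl is-active containerd       # prints "active"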
	I0817 02:12:38.139023 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 02:12:38.150529 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
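The command above ships /etc/containerd/config.toml as a base64 blob and decodes it on the node with base64 -d piped into sudo tee. To read back what was actually written (sketch):

	out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- sudo cat /etc/containerd/config.toml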
	I0817 02:12:38.167340 1581051 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 02:12:38.172829 1581051 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 02:12:38.178373 1581051 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 02:12:38.274556 1581051 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 02:12:38.365061 1581051 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 02:12:38.365121 1581051 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 02:12:38.368962 1581051 start.go:413] Will wait 60s for crictl version
	I0817 02:12:38.369020 1581051 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:12:38.411834 1581051 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T02:12:38Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 02:12:49.458800 1581051 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:12:49.480696 1581051 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 02:12:49.480743 1581051 ssh_runner.go:149] Run: containerd --version
	I0817 02:12:49.501093 1581051 ssh_runner.go:149] Run: containerd --version
	I0817 02:12:49.522194 1581051 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 02:12:49.522280 1581051 cli_runner.go:115] Run: docker network inspect functional-20210817021007-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 02:12:49.553292 1581051 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 02:12:49.558320 1581051 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0817 02:12:49.558385 1581051 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:12:49.558439 1581051 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:12:49.581251 1581051 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:12:49.581259 1581051 containerd.go:517] Images already preloaded, skipping extraction
	I0817 02:12:49.581296 1581051 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:12:49.603019 1581051 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:12:49.603028 1581051 cache_images.go:74] Images are preloaded, skipping loading
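The preload check above simply asks the runtime for its image list and compares it with the expected preload set. To see the same list interactively inside the node (sketch):

	sudo crictl images                   # human-readable table
	sudo crictl images --output json     # the form minikube parses, as in the Run: lines above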
	I0817 02:12:49.603064 1581051 ssh_runner.go:149] Run: sudo crictl info
	I0817 02:12:49.627885 1581051 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0817 02:12:49.627907 1581051 cni.go:93] Creating CNI manager for ""
	I0817 02:12:49.627915 1581051 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:12:49.627923 1581051 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 02:12:49.627935 1581051 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20210817021007-1554185 NodeName:functional-20210817021007-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 02:12:49.628062 1581051 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "functional-20210817021007-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 02:12:49.628176 1581051 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-20210817021007-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:functional-20210817021007-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
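The kubelet drop-in and unit above are written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service (see the scp lines just below). Once they are in place, the effective unit can be reviewed with (sketch):

	out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- sudo systemctl cat kubelet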
	I0817 02:12:49.628224 1581051 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 02:12:49.634168 1581051 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 02:12:49.634205 1581051 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 02:12:49.639828 1581051 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (578 bytes)
	I0817 02:12:49.650894 1581051 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 02:12:49.662258 1581051 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1933 bytes)
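The scp above stages the kubeadm config rendered earlier as /var/tmp/minikube/kubeadm.yaml.new; it is diffed against the previous copy further down. To inspect what actually landed on the node (sketch):

	out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new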
	I0817 02:12:49.673199 1581051 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 02:12:49.675846 1581051 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185 for IP: 192.168.49.2
	I0817 02:12:49.675880 1581051 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 02:12:49.675891 1581051 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 02:12:49.675944 1581051 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.key
	I0817 02:12:49.675958 1581051 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/apiserver.key.dd3b5fb2
	I0817 02:12:49.675971 1581051 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/proxy-client.key
	I0817 02:12:49.676072 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 02:12:49.676104 1581051 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 02:12:49.676112 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 02:12:49.676133 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 02:12:49.676153 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 02:12:49.676172 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 02:12:49.676212 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:12:49.677932 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 02:12:49.697926 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 02:12:49.712601 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 02:12:49.727622 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 02:12:49.742153 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 02:12:49.756965 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 02:12:49.771706 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 02:12:49.786157 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 02:12:49.800735 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 02:12:49.815663 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 02:12:49.830168 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 02:12:49.844530 1581051 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 02:12:49.855418 1581051 ssh_runner.go:149] Run: openssl version
	I0817 02:12:49.859697 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 02:12:49.866082 1581051 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 02:12:49.868776 1581051 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 02:12:49.868816 1581051 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 02:12:49.873038 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 02:12:49.878798 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 02:12:49.884983 1581051 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:12:49.887673 1581051 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:12:49.887713 1581051 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:12:49.891949 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 02:12:49.897792 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 02:12:49.907841 1581051 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 02:12:49.910529 1581051 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 02:12:49.910567 1581051 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 02:12:49.914994 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
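Each CA above is linked under /etc/ssl/certs by its OpenSSL subject hash, which is where link names like b5213941.0 and 3ec20f2e.0 come from. The hash is the output of the x509 -hash calls in the Run: lines, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA, matching the link created above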
	I0817 02:12:49.920794 1581051 kubeadm.go:390] StartCluster: {Name:functional-20210817021007-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210817021007-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:12:49.920892 1581051 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 02:12:49.920931 1581051 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:12:49.943709 1581051 cri.go:76] found id: "dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9"
	I0817 02:12:49.943719 1581051 cri.go:76] found id: "2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1"
	I0817 02:12:49.943724 1581051 cri.go:76] found id: "f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089"
	I0817 02:12:49.943728 1581051 cri.go:76] found id: "50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1"
	I0817 02:12:49.943731 1581051 cri.go:76] found id: "c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231"
	I0817 02:12:49.943735 1581051 cri.go:76] found id: "22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d"
	I0817 02:12:49.943739 1581051 cri.go:76] found id: "2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48"
	I0817 02:12:49.943742 1581051 cri.go:76] found id: "f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6"
	I0817 02:12:49.943746 1581051 cri.go:76] found id: ""
	I0817 02:12:49.943780 1581051 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:12:49.979547 1581051 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537","pid":1879,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537/rootfs","created":"2021-08-17T02:12:05.940254331Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f757e77c-ddf8-4d74-8754-424b9f0da712"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48","pid":1082,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2003791d6509e9ecb04e813cff2a13208974
2b896aea918cb0421a664f7f1f48","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48/rootfs","created":"2021-08-17T02:10:53.140506409Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d","pid":1080,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d/rootfs","created":"2021-08-17T02:10:53.141708982Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-
id":"af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1","pid":1968,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1/rootfs","created":"2021-08-17T02:12:06.069447788Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d","pid":1511,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d","rootfs":
"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d/rootfs","created":"2021-08-17T02:11:16.691380186Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-j6pl5_05b56857-a0f7-456b-a198-2eadf300625f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1","pid":1569,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1/rootfs","created":"2021-08-17T02:11:16.810715576Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernet
es.cri.sandbox-id":"a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2","pid":917,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2/rootfs","created":"2021-08-17T02:10:52.924982147Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-20210817021007-1554185_0ca1ece4f336742d796dd3951c235ff2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46","pid":965,"status":"running","bundle":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46/rootfs","created":"2021-08-17T02:10:52.976256055Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-20210817021007-1554185_af2969cdb2ca0145027b6cf2e1da9f5d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f","pid":1501,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f/rootfs","created":"2021-08-
17T02:11:16.686493064Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-5crrc_ff33274b-c870-4110-af24-a4056969c55a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799","pid":953,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799/rootfs","created":"2021-08-17T02:10:52.955916107Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-2021081702100
7-1554185_cfd18c863a995943023d977afa17770a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231","pid":1149,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231/rootfs","created":"2021-08-17T02:10:53.273710526Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021","pid":1929,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021","rootfs":"/run/containerd/io.containerd.runtime.v2.
task/k8s.io/d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021/rootfs","created":"2021-08-17T02:12:05.994010101Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-hwn2j_f70985d7-d2b7-408b-9d54-8d6c0b83ab1b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60","pid":1014,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60/rootfs","created":"2021-08-17T02:10:53.04222091Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d
60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-20210817021007-1554185_81e4d679ba718c5a1e1a22193ffc109a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9","pid":1991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9/rootfs","created":"2021-08-17T02:12:06.092583573Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6","pid":1051,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45d89600a90
429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6/rootfs","created":"2021-08-17T02:10:53.085902081Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089/rootfs","created":"2021-08-17T02:11:16.908297558Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.
cri.sandbox-id":"3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d"},"owner":"root"}]
	I0817 02:12:49.979739 1581051 cri.go:113] list returned 16 containers
	I0817 02:12:49.979747 1581051 cri.go:116] container: {ID:097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537 Status:running}
	I0817 02:12:49.979756 1581051 cri.go:118] skipping 097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537 - not in ps
	I0817 02:12:49.979760 1581051 cri.go:116] container: {ID:2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48 Status:running}
	I0817 02:12:49.979774 1581051 cri.go:122] skipping {2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48 running}: state = "running", want "paused"
	I0817 02:12:49.979783 1581051 cri.go:116] container: {ID:22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d Status:running}
	I0817 02:12:49.979788 1581051 cri.go:122] skipping {22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d running}: state = "running", want "paused"
	I0817 02:12:49.979793 1581051 cri.go:116] container: {ID:2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1 Status:running}
	I0817 02:12:49.979800 1581051 cri.go:122] skipping {2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1 running}: state = "running", want "paused"
	I0817 02:12:49.979805 1581051 cri.go:116] container: {ID:3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d Status:running}
	I0817 02:12:49.979810 1581051 cri.go:118] skipping 3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d - not in ps
	I0817 02:12:49.979813 1581051 cri.go:116] container: {ID:50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1 Status:running}
	I0817 02:12:49.979818 1581051 cri.go:122] skipping {50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1 running}: state = "running", want "paused"
	I0817 02:12:49.979823 1581051 cri.go:116] container: {ID:9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2 Status:running}
	I0817 02:12:49.979828 1581051 cri.go:118] skipping 9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2 - not in ps
	I0817 02:12:49.979831 1581051 cri.go:116] container: {ID:9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46 Status:running}
	I0817 02:12:49.979836 1581051 cri.go:118] skipping 9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46 - not in ps
	I0817 02:12:49.979839 1581051 cri.go:116] container: {ID:a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f Status:running}
	I0817 02:12:49.979844 1581051 cri.go:118] skipping a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f - not in ps
	I0817 02:12:49.979847 1581051 cri.go:116] container: {ID:af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799 Status:running}
	I0817 02:12:49.979852 1581051 cri.go:118] skipping af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799 - not in ps
	I0817 02:12:49.979855 1581051 cri.go:116] container: {ID:c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231 Status:running}
	I0817 02:12:49.979860 1581051 cri.go:122] skipping {c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231 running}: state = "running", want "paused"
	I0817 02:12:49.979864 1581051 cri.go:116] container: {ID:d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021 Status:running}
	I0817 02:12:49.979869 1581051 cri.go:118] skipping d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021 - not in ps
	I0817 02:12:49.979874 1581051 cri.go:116] container: {ID:d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60 Status:running}
	I0817 02:12:49.979879 1581051 cri.go:118] skipping d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60 - not in ps
	I0817 02:12:49.979882 1581051 cri.go:116] container: {ID:dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9 Status:running}
	I0817 02:12:49.979887 1581051 cri.go:122] skipping {dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9 running}: state = "running", want "paused"
	I0817 02:12:49.979891 1581051 cri.go:116] container: {ID:f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6 Status:running}
	I0817 02:12:49.979896 1581051 cri.go:122] skipping {f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6 running}: state = "running", want "paused"
	I0817 02:12:49.979900 1581051 cri.go:116] container: {ID:f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089 Status:running}
	I0817 02:12:49.979905 1581051 cri.go:122] skipping {f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089 running}: state = "running", want "paused"
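The listing above comes from runc and is filtered for paused containers; since everything is still running, nothing is selected. A rough equivalent inside the node, assuming jq is available (it is not part of the node image, so this is only a sketch):

	sudo runc --root /run/containerd/runc/k8s.io list -f json | jq -r '.[] | select(.status == "paused") | .id'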
	I0817 02:12:49.979943 1581051 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 02:12:49.985942 1581051 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 02:12:49.985950 1581051 kubeadm.go:600] restartCluster start
	I0817 02:12:49.985987 1581051 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 02:12:49.991392 1581051 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:12:49.992237 1581051 kubeconfig.go:93] found "functional-20210817021007-1554185" server: "https://192.168.49.2:8441"
	I0817 02:12:49.994163 1581051 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 02:12:49.999939 1581051 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-17 02:10:36.950558563 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-17 02:12:49.669231826 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
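Only the enable-admission-plugins flag differs, so minikube reconfigures the existing cluster instead of recreating it. After the apiserver static pod is regenerated, the new flag can be confirmed in its manifest (sketch):

	out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml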
	I0817 02:12:49.999949 1581051 kubeadm.go:1032] stopping kube-system containers ...
	I0817 02:12:49.999957 1581051 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:12:49.999994 1581051 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:12:50.022390 1581051 cri.go:76] found id: "dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9"
	I0817 02:12:50.022402 1581051 cri.go:76] found id: "2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1"
	I0817 02:12:50.022407 1581051 cri.go:76] found id: "f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089"
	I0817 02:12:50.022411 1581051 cri.go:76] found id: "50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1"
	I0817 02:12:50.022415 1581051 cri.go:76] found id: "c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231"
	I0817 02:12:50.022420 1581051 cri.go:76] found id: "22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d"
	I0817 02:12:50.022423 1581051 cri.go:76] found id: "2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48"
	I0817 02:12:50.022427 1581051 cri.go:76] found id: "f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6"
	I0817 02:12:50.022431 1581051 cri.go:76] found id: ""
	I0817 02:12:50.022435 1581051 cri.go:221] Stopping containers: [dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9 2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1 f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089 50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1 c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231 22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d 2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48 f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6]
	I0817 02:12:50.022471 1581051 ssh_runner.go:149] Run: which crictl
	I0817 02:12:50.024996 1581051 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9 2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1 f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089 50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1 c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231 22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d 2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48 f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6
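The stop above passes the eight IDs found earlier to crictl in one call. The same thing done directly inside the node, reusing the label filter from the Run: lines (sketch; xargs -r is the GNU flag that skips the call when the list is empty):

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | xargs -r sudo crictl stop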
	I0817 02:12:50.399512 1581051 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 02:12:50.465869 1581051 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:12:50.472097 1581051 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 02:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 17 02:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 17 02:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 17 02:10 /etc/kubernetes/scheduler.conf
	
	I0817 02:12:50.472133 1581051 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0817 02:12:50.478010 1581051 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0817 02:12:50.484019 1581051 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0817 02:12:50.489560 1581051 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:12:50.489592 1581051 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 02:12:50.495736 1581051 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0817 02:12:50.501202 1581051 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:12:50.501236 1581051 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 02:12:50.506576 1581051 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 02:12:50.512296 1581051 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 02:12:50.512304 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:12:50.574426 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:12:53.312237 1581051 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.737792739s)
	I0817 02:12:53.312253 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:12:53.483180 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:12:53.590437 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:12:53.656853 1581051 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:12:53.656902 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:54.168420 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:54.668486 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:55.168561 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:55.668575 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:56.168151 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:56.667899 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:57.168127 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:57.668107 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:58.168117 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:58.668124 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:59.168110 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:59.667945 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:00.168635 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:00.667887 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:01.167918 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:01.667971 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:02.168125 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:02.667896 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:03.167890 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:03.191872 1581051 api_server.go:70] duration metric: took 9.535019578s to wait for apiserver process to appear ...
	I0817 02:13:03.191883 1581051 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:13:03.191891 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:08.192142 1581051 api_server.go:255] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 02:13:08.692786 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:09.649400 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 02:13:09.649413 1581051 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 02:13:09.692585 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:09.747515 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 02:13:09.747528 1581051 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 02:13:10.193053 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:10.201061 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 02:13:10.201072 1581051 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 02:13:10.692292 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:10.700691 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 02:13:10.700705 1581051 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 02:13:11.192313 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:11.200697 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0817 02:13:11.213450 1581051 api_server.go:139] control plane version: v1.21.3
	I0817 02:13:11.213461 1581051 api_server.go:129] duration metric: took 8.02157225s to wait for apiserver health ...
	I0817 02:13:11.213468 1581051 cni.go:93] Creating CNI manager for ""
	I0817 02:13:11.213478 1581051 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:13:11.215323 1581051 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:13:11.215392 1581051 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:13:11.218521 1581051 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 02:13:11.218528 1581051 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:13:11.238945 1581051 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:13:11.504341 1581051 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:13:11.514704 1581051 system_pods.go:59] 8 kube-system pods found
	I0817 02:13:11.514721 1581051 system_pods.go:61] "coredns-558bd4d5db-hwn2j" [f70985d7-d2b7-408b-9d54-8d6c0b83ab1b] Running
	I0817 02:13:11.514726 1581051 system_pods.go:61] "etcd-functional-20210817021007-1554185" [c22ede17-13df-4f29-9c3a-172efe4e4b09] Running
	I0817 02:13:11.514729 1581051 system_pods.go:61] "kindnet-j6pl5" [05b56857-a0f7-456b-a198-2eadf300625f] Running
	I0817 02:13:11.514734 1581051 system_pods.go:61] "kube-apiserver-functional-20210817021007-1554185" [f85b64df-d133-4ace-8fdf-6f4282916df8] Pending
	I0817 02:13:11.514738 1581051 system_pods.go:61] "kube-controller-manager-functional-20210817021007-1554185" [14b3db86-2bc5-4092-8874-61a165b71e45] Running
	I0817 02:13:11.514745 1581051 system_pods.go:61] "kube-proxy-5crrc" [ff33274b-c870-4110-af24-a4056969c55a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 02:13:11.514751 1581051 system_pods.go:61] "kube-scheduler-functional-20210817021007-1554185" [c8ef8d45-0d8a-475e-96c6-737097845dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 02:13:11.514757 1581051 system_pods.go:61] "storage-provisioner" [f757e77c-ddf8-4d74-8754-424b9f0da712] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:13:11.514763 1581051 system_pods.go:74] duration metric: took 10.412893ms to wait for pod list to return data ...
	I0817 02:13:11.514769 1581051 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:13:11.517921 1581051 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:13:11.517934 1581051 node_conditions.go:123] node cpu capacity is 2
	I0817 02:13:11.517944 1581051 node_conditions.go:105] duration metric: took 3.171694ms to run NodePressure ...
	I0817 02:13:11.517956 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:13:11.851566 1581051 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 02:13:11.855266 1581051 kubeadm.go:746] kubelet initialised
	I0817 02:13:11.855273 1581051 kubeadm.go:747] duration metric: took 3.696136ms waiting for restarted kubelet to initialise ...
	I0817 02:13:11.855278 1581051 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:13:11.863764 1581051 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:11.883989 1581051 pod_ready.go:92] pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:11.883997 1581051 pod_ready.go:81] duration metric: took 20.221238ms waiting for pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:11.884012 1581051 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:11.888258 1581051 pod_ready.go:92] pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:11.888264 1581051 pod_ready.go:81] duration metric: took 4.246111ms waiting for pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:11.888273 1581051 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:13.897326 1581051 pod_ready.go:102] pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"False"
	I0817 02:13:16.397084 1581051 pod_ready.go:102] pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"False"
	I0817 02:13:18.397545 1581051 pod_ready.go:92] pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:18.397565 1581051 pod_ready.go:81] duration metric: took 6.509284337s waiting for pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:18.397575 1581051 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.406280 1581051 pod_ready.go:92] pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:19.406300 1581051 pod_ready.go:81] duration metric: took 1.008717521s waiting for pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.406309 1581051 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5crrc" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.410150 1581051 pod_ready.go:92] pod "kube-proxy-5crrc" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:19.410156 1581051 pod_ready.go:81] duration metric: took 3.841381ms waiting for pod "kube-proxy-5crrc" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.410163 1581051 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.413776 1581051 pod_ready.go:92] pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:19.413782 1581051 pod_ready.go:81] duration metric: took 3.613067ms waiting for pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.413791 1581051 pod_ready.go:38] duration metric: took 7.558504477s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:13:19.413805 1581051 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:13:19.423851 1581051 ops.go:34] apiserver oom_adj: -16
	I0817 02:13:19.423859 1581051 kubeadm.go:604] restartCluster took 29.437904549s
	I0817 02:13:19.423864 1581051 kubeadm.go:392] StartCluster complete in 29.50308089s
	I0817 02:13:19.423877 1581051 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:13:19.423959 1581051 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:13:19.424616 1581051 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:13:19.428572 1581051 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20210817021007-1554185" rescaled to 1
	I0817 02:13:19.428601 1581051 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 02:13:19.430756 1581051 out.go:177] * Verifying Kubernetes components...
	I0817 02:13:19.430807 1581051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:13:19.428702 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:13:19.428904 1581051 config.go:177] Loaded profile config "functional-20210817021007-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:13:19.428916 1581051 addons.go:342] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0817 02:13:19.430917 1581051 addons.go:59] Setting storage-provisioner=true in profile "functional-20210817021007-1554185"
	I0817 02:13:19.430929 1581051 addons.go:135] Setting addon storage-provisioner=true in "functional-20210817021007-1554185"
	W0817 02:13:19.430933 1581051 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:13:19.430952 1581051 host.go:66] Checking if "functional-20210817021007-1554185" exists ...
	I0817 02:13:19.430968 1581051 addons.go:59] Setting default-storageclass=true in profile "functional-20210817021007-1554185"
	I0817 02:13:19.430981 1581051 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20210817021007-1554185"
	I0817 02:13:19.431250 1581051 cli_runner.go:115] Run: docker container inspect functional-20210817021007-1554185 --format={{.State.Status}}
	I0817 02:13:19.431410 1581051 cli_runner.go:115] Run: docker container inspect functional-20210817021007-1554185 --format={{.State.Status}}
	I0817 02:13:19.454735 1581051 node_ready.go:35] waiting up to 6m0s for node "functional-20210817021007-1554185" to be "Ready" ...
	I0817 02:13:19.462941 1581051 node_ready.go:49] node "functional-20210817021007-1554185" has status "Ready":"True"
	I0817 02:13:19.462947 1581051 node_ready.go:38] duration metric: took 8.196554ms waiting for node "functional-20210817021007-1554185" to be "Ready" ...
	I0817 02:13:19.462954 1581051 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:13:19.468049 1581051 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.500071 1581051 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:13:19.500160 1581051 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:13:19.500167 1581051 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:13:19.500213 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:13:19.501777 1581051 addons.go:135] Setting addon default-storageclass=true in "functional-20210817021007-1554185"
	W0817 02:13:19.501786 1581051 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:13:19.501809 1581051 host.go:66] Checking if "functional-20210817021007-1554185" exists ...
	I0817 02:13:19.502243 1581051 cli_runner.go:115] Run: docker container inspect functional-20210817021007-1554185 --format={{.State.Status}}
	I0817 02:13:19.551738 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:13:19.583914 1581051 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:13:19.583924 1581051 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:13:19.583971 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:13:19.607059 1581051 pod_ready.go:92] pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:19.607067 1581051 pod_ready.go:81] duration metric: took 139.006758ms waiting for pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.607076 1581051 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.633163 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:13:19.664126 1581051 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 02:13:19.688471 1581051 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:13:19.760696 1581051 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:13:20.004474 1581051 pod_ready.go:92] pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:20.004483 1581051 pod_ready.go:81] duration metric: took 397.400219ms waiting for pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:20.004496 1581051 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:20.074050 1581051 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:13:20.074087 1581051 addons.go:344] enableAddons completed in 645.17316ms
	I0817 02:13:20.396338 1581051 pod_ready.go:92] pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:20.396346 1581051 pod_ready.go:81] duration metric: took 391.843212ms waiting for pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:20.396356 1581051 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:20.795722 1581051 pod_ready.go:92] pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:20.795730 1581051 pod_ready.go:81] duration metric: took 399.367117ms waiting for pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:20.795740 1581051 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5crrc" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:21.195905 1581051 pod_ready.go:92] pod "kube-proxy-5crrc" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:21.195912 1581051 pod_ready.go:81] duration metric: took 400.166288ms waiting for pod "kube-proxy-5crrc" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:21.195920 1581051 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:21.596206 1581051 pod_ready.go:92] pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:21.596215 1581051 pod_ready.go:81] duration metric: took 400.287338ms waiting for pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:21.596224 1581051 pod_ready.go:38] duration metric: took 2.133261068s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:13:21.596237 1581051 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:13:21.596280 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:21.609169 1581051 api_server.go:70] duration metric: took 2.180548488s to wait for apiserver process to appear ...
	I0817 02:13:21.609178 1581051 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:13:21.609186 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:21.617657 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0817 02:13:21.618433 1581051 api_server.go:139] control plane version: v1.21.3
	I0817 02:13:21.618441 1581051 api_server.go:129] duration metric: took 9.259181ms to wait for apiserver health ...
	I0817 02:13:21.618447 1581051 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:13:21.799161 1581051 system_pods.go:59] 8 kube-system pods found
	I0817 02:13:21.799173 1581051 system_pods.go:61] "coredns-558bd4d5db-hwn2j" [f70985d7-d2b7-408b-9d54-8d6c0b83ab1b] Running
	I0817 02:13:21.799177 1581051 system_pods.go:61] "etcd-functional-20210817021007-1554185" [c22ede17-13df-4f29-9c3a-172efe4e4b09] Running
	I0817 02:13:21.799181 1581051 system_pods.go:61] "kindnet-j6pl5" [05b56857-a0f7-456b-a198-2eadf300625f] Running
	I0817 02:13:21.799186 1581051 system_pods.go:61] "kube-apiserver-functional-20210817021007-1554185" [f85b64df-d133-4ace-8fdf-6f4282916df8] Running
	I0817 02:13:21.799190 1581051 system_pods.go:61] "kube-controller-manager-functional-20210817021007-1554185" [14b3db86-2bc5-4092-8874-61a165b71e45] Running
	I0817 02:13:21.799194 1581051 system_pods.go:61] "kube-proxy-5crrc" [ff33274b-c870-4110-af24-a4056969c55a] Running
	I0817 02:13:21.799198 1581051 system_pods.go:61] "kube-scheduler-functional-20210817021007-1554185" [c8ef8d45-0d8a-475e-96c6-737097845dcb] Running
	I0817 02:13:21.799202 1581051 system_pods.go:61] "storage-provisioner" [f757e77c-ddf8-4d74-8754-424b9f0da712] Running
	I0817 02:13:21.799206 1581051 system_pods.go:74] duration metric: took 180.754796ms to wait for pod list to return data ...
	I0817 02:13:21.799211 1581051 default_sa.go:34] waiting for default service account to be created ...
	I0817 02:13:21.998510 1581051 default_sa.go:45] found service account: "default"
	I0817 02:13:21.998525 1581051 default_sa.go:55] duration metric: took 199.309737ms for default service account to be created ...
	I0817 02:13:21.998531 1581051 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 02:13:22.199290 1581051 system_pods.go:86] 8 kube-system pods found
	I0817 02:13:22.199304 1581051 system_pods.go:89] "coredns-558bd4d5db-hwn2j" [f70985d7-d2b7-408b-9d54-8d6c0b83ab1b] Running
	I0817 02:13:22.199310 1581051 system_pods.go:89] "etcd-functional-20210817021007-1554185" [c22ede17-13df-4f29-9c3a-172efe4e4b09] Running
	I0817 02:13:22.199314 1581051 system_pods.go:89] "kindnet-j6pl5" [05b56857-a0f7-456b-a198-2eadf300625f] Running
	I0817 02:13:22.199319 1581051 system_pods.go:89] "kube-apiserver-functional-20210817021007-1554185" [f85b64df-d133-4ace-8fdf-6f4282916df8] Running
	I0817 02:13:22.199326 1581051 system_pods.go:89] "kube-controller-manager-functional-20210817021007-1554185" [14b3db86-2bc5-4092-8874-61a165b71e45] Running
	I0817 02:13:22.199330 1581051 system_pods.go:89] "kube-proxy-5crrc" [ff33274b-c870-4110-af24-a4056969c55a] Running
	I0817 02:13:22.199335 1581051 system_pods.go:89] "kube-scheduler-functional-20210817021007-1554185" [c8ef8d45-0d8a-475e-96c6-737097845dcb] Running
	I0817 02:13:22.199338 1581051 system_pods.go:89] "storage-provisioner" [f757e77c-ddf8-4d74-8754-424b9f0da712] Running
	I0817 02:13:22.199343 1581051 system_pods.go:126] duration metric: took 200.809102ms to wait for k8s-apps to be running ...
	I0817 02:13:22.199349 1581051 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 02:13:22.199394 1581051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:13:22.208388 1581051 system_svc.go:56] duration metric: took 9.035372ms WaitForService to wait for kubelet.
	I0817 02:13:22.208397 1581051 kubeadm.go:547] duration metric: took 2.77978148s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 02:13:22.208415 1581051 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:13:22.396720 1581051 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:13:22.396730 1581051 node_conditions.go:123] node cpu capacity is 2
	I0817 02:13:22.396740 1581051 node_conditions.go:105] duration metric: took 188.320784ms to run NodePressure ...
	I0817 02:13:22.396748 1581051 start.go:231] waiting for startup goroutines ...
	I0817 02:13:22.448536 1581051 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 02:13:22.450801 1581051 out.go:177] * Done! kubectl is now configured to use "functional-20210817021007-1554185" cluster and "default" namespace by default
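The restart sequence logged above reduces to: stop the old control-plane containers with crictl, re-run the kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd), wait for a kube-apiserver process via pgrep, then poll https://192.168.49.2:8441/healthz until it returns 200 (the intermediate 403 and 500 responses are expected while post-start hooks finish). A minimal Go sketch of that final health-polling step; the waitForAPIServer helper, its timeouts, and the insecure TLS probe are illustrative assumptions, not minikube's actual code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls <endpoint>/healthz until it answers 200 OK or the
// deadline passes, mirroring the "waiting for apiserver healthz status" phase.
func waitForAPIServer(endpoint string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The bootstrap apiserver presents a cluster-internal certificate, so this
		// unauthenticated probe skips verification (an assumption for the sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane is up
			}
			// 403 (anonymous user) and 500 (post-start hooks still running)
			// are transient states, exactly as seen in the log above.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", endpoint, timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.49.2:8441", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}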
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fa89921d92ecf       72565bf5bbedf       3 minutes ago       Running             echoserver-arm            0                   8579f83b940bc
	a134cfa72ef30       ba04bb24b9575       3 minutes ago       Running             storage-provisioner       1                   097a0e56d670f
	12cbd98080c41       4ea38350a1beb       3 minutes ago       Running             kube-proxy                1                   a18545e5e3a38
	2d043a197ec35       1a1f05a2cd7c2       3 minutes ago       Running             coredns                   1                   d307bb881660d
	d07341ce12e32       f37b7c809e5dc       3 minutes ago       Running             kindnet-cni               1                   3c1724c3b0db2
	82907d1c8abec       44a6d50ef170d       3 minutes ago       Running             kube-apiserver            0                   49cabe38d7e82
	a5de4bf70d6b1       cb310ff289d79       3 minutes ago       Running             kube-controller-manager   1                   9f018263acf74
	9957a9c6e7457       05b738aa1bc63       3 minutes ago       Running             etcd                      1                   d6d60a35ea281
	1e6ff32edae34       31a3b96cefc1e       3 minutes ago       Running             kube-scheduler            1                   af2ad6c437351
	dcb3485081788       1a1f05a2cd7c2       4 minutes ago       Exited              coredns                   0                   d307bb881660d
	2e48c5b1b49b7       ba04bb24b9575       4 minutes ago       Exited              storage-provisioner       0                   097a0e56d670f
	f5ccf8d4a795a       f37b7c809e5dc       5 minutes ago       Exited              kindnet-cni               0                   3c1724c3b0db2
	50882d637a8db       4ea38350a1beb       5 minutes ago       Exited              kube-proxy                0                   a18545e5e3a38
	c4691bd6b5911       05b738aa1bc63       5 minutes ago       Exited              etcd                      0                   d6d60a35ea281
	22d1f79cab38d       31a3b96cefc1e       5 minutes ago       Exited              kube-scheduler            0                   af2ad6c437351
	2003791d6509e       cb310ff289d79       5 minutes ago       Exited              kube-controller-manager   0                   9f018263acf74
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:10:09 UTC, end at Tue 2021-08-17 02:16:51 UTC. --
	Aug 17 02:13:50 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:50.291547441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:sp-pod,Uid:42fb579e-9919-4aa5-a93f-5e2576652c96,Namespace:default,Attempt:0,} returns sandbox id \"70f2af12c20acb4034e4da2ad71fc0658fce23140aa6aa74244d383e6a3894ef\""
	Aug 17 02:13:50 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:50.848037057Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:13:50 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:50.848873922Z" level=info msg="PullImage \"nginx:latest\""
	Aug 17 02:13:51 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:51.751220551Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:14:01 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:01.932805925Z" level=info msg="RemoveContainer for \"f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6\""
	Aug 17 02:14:01 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:01.938197288Z" level=info msg="RemoveContainer for \"f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6\" returns successfully"
	Aug 17 02:14:01 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:01.939950608Z" level=info msg="StopPodSandbox for \"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2\""
	Aug 17 02:14:01 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:01.940058061Z" level=info msg="TearDown network for sandbox \"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2\" successfully"
	Aug 17 02:14:01 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:01.940070295Z" level=info msg="StopPodSandbox for \"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2\" returns successfully"
	Aug 17 02:14:01 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:01.940430160Z" level=info msg="RemovePodSandbox for \"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2\""
	Aug 17 02:14:01 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:01.946026453Z" level=info msg="RemovePodSandbox \"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2\" returns successfully"
	Aug 17 02:14:02 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:02.944669088Z" level=info msg="PullImage \"nginx:latest\""
	Aug 17 02:14:03 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:03.842295149Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:14:16 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:16.944650874Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:14:18 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:18.030610289Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:14:26 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:26.944446850Z" level=info msg="PullImage \"nginx:latest\""
	Aug 17 02:14:27 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:14:27.840099884Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:15:01 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:15:01.945601267Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:15:02 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:15:02.973936150Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:15:07 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:15:07.945024168Z" level=info msg="PullImage \"nginx:latest\""
	Aug 17 02:15:08 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:15:08.939631232Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:16:29 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:16:29.944717860Z" level=info msg="PullImage \"nginx:latest\""
	Aug 17 02:16:30 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:16:30.936807804Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:7ef3ca6ca846a10787f98fd2722d6e4054a17b37981a3ca273207a792731aebe: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 17 02:16:37 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:16:37.944511497Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 17 02:16:38 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:16:38.830138177Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
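Every nginx pull in the containerd log above fails with HTTP 429 from registry-1.docker.io, i.e. the anonymous Docker Hub pull-rate limit, not a cluster-side fault. Docker documents a way to inspect the remaining quota: fetch an anonymous token from auth.docker.io, then issue a HEAD request whose ratelimit-* response headers report the limit. A small, illustrative Go sketch of that check (not part of the test suite; minimal error handling):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Anonymous token scoped to Docker's rate-limit preview repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD a manifest with that token; the response carries the quota headers.
	req, _ := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

Authenticating the pulls or pointing containerd at a registry mirror would avoid the 429s; both are configuration choices outside this log.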
	
	* 
	* ==> coredns [2d043a197ec35b03025658e8a189e9a49ac30056e2358b98e5e0c85615fbbde9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> coredns [dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20210817021007-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-20210817021007-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=functional-20210817021007-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T02_11_02_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 02:10:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20210817021007-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 02:16:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 02:14:10 +0000   Tue, 17 Aug 2021 02:10:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 02:14:10 +0000   Tue, 17 Aug 2021 02:10:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 02:14:10 +0000   Tue, 17 Aug 2021 02:10:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 02:14:10 +0000   Tue, 17 Aug 2021 02:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20210817021007-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                7cd3ce4e-d107-428e-9bf7-b8e3aad5da0f
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d98884d59-96mjk                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     nginx-svc                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  default                     sp-pod                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-558bd4d5db-hwn2j                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m35s
	  kube-system                 etcd-functional-20210817021007-1554185                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m40s
	  kube-system                 kindnet-j6pl5                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m35s
	  kube-system                 kube-apiserver-functional-20210817021007-1554185              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-controller-manager-functional-20210817021007-1554185     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-5crrc                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-scheduler-functional-20210817021007-1554185              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  5m59s (x5 over 5m59s)  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x4 over 5m59s)  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x4 over 5m59s)  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m41s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m41s                  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s                  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s                  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m35s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m51s                  kubelet     Node functional-20210817021007-1554185 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m39s                  kube-proxy  Starting kube-proxy.
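The node description above is what the "verifying NodePressure condition" steps in the start log read back: capacity (cpu 2, ephemeral-storage 81118084Ki) and the MemoryPressure/DiskPressure/PIDPressure/Ready conditions. A minimal client-go sketch that inspects the same fields; it assumes the default ~/.kube/config location rather than the harness-specific kubeconfig path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// clientcmd.RecommendedHomeFile is ~/.kube/config; the test harness uses its own path.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False; Ready should be True.
			fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
		}
	}
}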
	
	* 
	* ==> dmesg <==
	* [Aug17 01:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [9957a9c6e74576996352d46594efafef2b23f6669b100da5c2de39c511314a45] <==
	* 2021-08-17 02:13:04.030863 I | embed: ready to serve client requests
	2021-08-17 02:13:04.032102 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:13:12.669195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:13:18.913327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:13:28.912570 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:13:38.913042 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:13:48.913281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:13:58.912722 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:14:08.913138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:14:18.912928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:14:28.913152 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:14:38.912484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:14:48.912776 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:14:58.912857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:15:08.912921 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:15:18.912706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:15:28.913290 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:15:38.912616 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:15:48.913429 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:15:58.913005 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:16:08.912862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:16:18.912606 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:16:28.913422 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:16:38.912957 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:16:48.912939 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231] <==
	* 2021-08-17 02:10:53.363552 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/17 02:10:53 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 02:10:53 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 02:10:53 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 02:10:53 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 02:10:53 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 02:10:53.908924 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 02:10:53.909644 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 02:10:53.909803 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 02:10:53.909893 I | etcdserver: published {Name:functional-20210817021007-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 02:10:53.909977 I | embed: ready to serve client requests
	2021-08-17 02:10:53.911325 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:10:53.911642 I | embed: ready to serve client requests
	2021-08-17 02:10:53.914059 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 02:11:12.117266 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:11:15.942038 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:11:25.943037 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:11:35.942114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:11:45.942457 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:11:55.942536 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:12:05.943054 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:12:15.942442 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:12:25.942138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:12:35.942262 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:12:45.943039 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:16:51 up  9:59,  0 users,  load average: 0.59, 0.68, 0.91
	Linux functional-20210817021007-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [82907d1c8abecfc9bae9ff0d65d732ca0318989aa5efac9548672a44a4571aff] <==
	* I0817 02:13:11.757226       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 02:13:11.830543       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 02:13:11.839010       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 02:13:22.995014       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 02:13:23.001768       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 02:13:31.651929       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0817 02:13:31.747403       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0817 02:13:37.947687       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:13:37.947745       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:13:37.947760       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:14:12.092514       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:14:12.092569       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:14:12.092596       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:14:44.441053       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:14:44.441094       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:14:44.441103       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:15:26.986772       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:15:26.986831       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:15:26.986839       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:15:58.454958       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:15:58.455000       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:15:58.455008       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:16:33.535324       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:16:33.535367       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:16:33.535376       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48] <==
	* I0817 02:11:16.141688       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0817 02:11:16.142711       1 event.go:291] "Event occurred" object="functional-20210817021007-1554185" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20210817021007-1554185 event: Registered Node functional-20210817021007-1554185 in Controller"
	I0817 02:11:16.149460       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0817 02:11:16.149639       1 shared_informer.go:247] Caches are synced for endpoint 
	I0817 02:11:16.150864       1 shared_informer.go:247] Caches are synced for PV protection 
	I0817 02:11:16.158931       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 02:11:16.170851       1 shared_informer.go:247] Caches are synced for service account 
	W0817 02:11:16.188708       1 endpointslice_controller.go:305] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	I0817 02:11:16.190083       1 event.go:291] "Event occurred" object="kube-system/etcd-functional-20210817021007-1554185" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 02:11:16.190200       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-functional-20210817021007-1554185" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 02:11:16.190419       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-hwn2j"
	I0817 02:11:16.243401       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j6pl5"
	I0817 02:11:16.243581       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5crrc"
	I0817 02:11:16.305759       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0817 02:11:16.319222       1 shared_informer.go:247] Caches are synced for disruption 
	I0817 02:11:16.322837       1 disruption.go:371] Sending events to api server.
	E0817 02:11:16.325730       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"9034d0c5-ad4f-446a-870e-a81158502e06", ResourceVersion:"415", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764763062, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001fda630), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001fda648)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001fda660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001fda678)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001f8f620), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Crea
tionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001fda690), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.Flex
VolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001fda6a8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVo
lumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CS
IVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001fda6c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*
v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001f8f640)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001f8f680)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amou
nt{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropa
gation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001fb5560), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001fccfe8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40005051f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(ni
l), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001fdf7c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001fcd030)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetConditio
n(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0817 02:11:16.342897       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:11:16.409582       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:11:16.486937       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0817 02:11:16.502939       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-59tz7"
	I0817 02:11:16.766228       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:11:16.849104       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:11:16.849132       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 02:12:01.146582       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-controller-manager [a5de4bf70d6b15f0690c7fa47a8bbe220d6a9421a270ca19ae0415d1cce3e279] <==
	* I0817 02:13:22.962456       1 event.go:291] "Event occurred" object="functional-20210817021007-1554185" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20210817021007-1554185 event: Registered Node functional-20210817021007-1554185 in Controller"
	I0817 02:13:22.964821       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0817 02:13:22.965113       1 shared_informer.go:247] Caches are synced for GC 
	I0817 02:13:22.965709       1 shared_informer.go:247] Caches are synced for disruption 
	I0817 02:13:22.965816       1 disruption.go:371] Sending events to api server.
	I0817 02:13:22.969223       1 shared_informer.go:247] Caches are synced for TTL 
	I0817 02:13:22.976409       1 shared_informer.go:247] Caches are synced for deployment 
	I0817 02:13:23.015704       1 shared_informer.go:247] Caches are synced for expand 
	I0817 02:13:23.026997       1 shared_informer.go:247] Caches are synced for stateful set 
	I0817 02:13:23.039054       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0817 02:13:23.039223       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0817 02:13:23.039359       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0817 02:13:23.039386       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0817 02:13:23.046604       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0817 02:13:23.087368       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:13:23.131120       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 02:13:23.164808       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:13:23.265154       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0817 02:13:23.651349       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:13:23.666878       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:13:23.666894       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 02:13:31.655256       1 event.go:291] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-6d98884d59 to 1"
	I0817 02:13:31.720888       1 event.go:291] "Event occurred" object="default/hello-node-6d98884d59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-6d98884d59-96mjk"
	I0817 02:13:49.555512       1 event.go:291] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0817 02:13:49.555891       1 event.go:291] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	
	* 
	* ==> kube-proxy [12cbd98080c419ded9a5dbf4fc964388effa7fa297ddf5a27d30469abf42a1af] <==
	* I0817 02:13:12.053736       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:13:12.053785       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:13:12.053805       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:13:12.075594       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:13:12.075619       1 server_others.go:212] Using iptables Proxier.
	I0817 02:13:12.075628       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:13:12.075639       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:13:12.076073       1 server.go:643] Version: v1.21.3
	I0817 02:13:12.076689       1 config.go:315] Starting service config controller
	I0817 02:13:12.076707       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:13:12.076793       1 config.go:224] Starting endpoint slice config controller
	I0817 02:13:12.076806       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:13:12.086115       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:13:12.091151       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:13:12.177451       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 02:13:12.177457       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1] <==
	* I0817 02:11:16.894705       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:11:16.894942       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:11:16.895032       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:11:16.928213       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:11:16.928342       1 server_others.go:212] Using iptables Proxier.
	I0817 02:11:16.928441       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:11:16.928521       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:11:16.931724       1 server.go:643] Version: v1.21.3
	I0817 02:11:16.932480       1 config.go:315] Starting service config controller
	I0817 02:11:16.932578       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:11:16.932686       1 config.go:224] Starting endpoint slice config controller
	I0817 02:11:16.932767       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:11:16.937956       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:11:16.940369       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:11:17.032795       1 shared_informer.go:247] Caches are synced for service config 
	I0817 02:11:17.032856       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [1e6ff32edae34dda2c87416ed842a9831a9bd160752a1210d740274e650d265d] <==
	* I0817 02:13:05.087351       1 serving.go:347] Generated self-signed cert in-memory
	W0817 02:13:09.654538       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 02:13:09.654571       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 02:13:09.654580       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 02:13:09.654587       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 02:13:09.758659       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 02:13:09.759223       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 02:13:09.770760       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 02:13:09.761731       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0817 02:13:09.871723       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d] <==
	* W0817 02:10:59.640856       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 02:10:59.640862       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 02:10:59.698489       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 02:10:59.698866       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0817 02:10:59.698904       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 02:10:59.713744       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0817 02:10:59.721343       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 02:10:59.721495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:10:59.721621       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:10:59.721827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:10:59.722390       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:10:59.722079       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:10:59.722152       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:10:59.722218       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:10:59.722284       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:10:59.722334       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:10:59.726916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 02:10:59.727160       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:10:59.727791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:10:59.728042       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 02:11:00.548035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:11:00.548272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:11:00.689047       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:11:00.716644       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 02:11:02.514618       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:10:09 UTC, end at Tue 2021-08-17 02:16:51 UTC. --
	Aug 17 02:15:08 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:08.939850    3562 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Aug 17 02:15:08 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:08.939917    3562 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Aug 17 02:15:08 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:08.940002    3562 kuberuntime_manager.go:864] container &Container{Name:myfrontend,Image:nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mcwlv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod sp-pod_default(42fb579e-9919-4aa5-a93f-5e2576652c96): ErrImagePull: rpc error: code = Unknown desc = faile
d to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 17 02:15:08 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:08.940063    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID=42fb579e-9919-4aa5-a93f-5e2576652c96
	Aug 17 02:15:17 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:17.952445    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=842360d0-8253-4ab4-b42b-9940bf1090e0
	Aug 17 02:15:23 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:23.944876    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=42fb579e-9919-4aa5-a93f-5e2576652c96
	Aug 17 02:15:30 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:30.945343    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=842360d0-8253-4ab4-b42b-9940bf1090e0
	Aug 17 02:15:35 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:35.944967    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=42fb579e-9919-4aa5-a93f-5e2576652c96
	Aug 17 02:15:41 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:41.945084    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=842360d0-8253-4ab4-b42b-9940bf1090e0
	Aug 17 02:15:49 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:49.944831    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=42fb579e-9919-4aa5-a93f-5e2576652c96
	Aug 17 02:15:55 functional-20210817021007-1554185 kubelet[3562]: E0817 02:15:55.945339    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=842360d0-8253-4ab4-b42b-9940bf1090e0
	Aug 17 02:16:02 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:02.944281    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=42fb579e-9919-4aa5-a93f-5e2576652c96
	Aug 17 02:16:08 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:08.945062    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=842360d0-8253-4ab4-b42b-9940bf1090e0
	Aug 17 02:16:15 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:15.945115    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=42fb579e-9919-4aa5-a93f-5e2576652c96
	Aug 17 02:16:22 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:22.945012    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=842360d0-8253-4ab4-b42b-9940bf1090e0
	Aug 17 02:16:30 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:30.937048    3562 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:7ef3ca6ca846a10787f98fd2722d6e4054a17b37981a3ca273207a792731aebe: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Aug 17 02:16:30 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:30.937102    3562 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:7ef3ca6ca846a10787f98fd2722d6e4054a17b37981a3ca273207a792731aebe: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Aug 17 02:16:30 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:30.937190    3562 kuberuntime_manager.go:864] container &Container{Name:myfrontend,Image:nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mcwlv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod sp-pod_default(42fb579e-9919-4aa5-a93f-5e2576652c96): ErrImagePull: rpc error: code = Unknown desc = faile
d to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:7ef3ca6ca846a10787f98fd2722d6e4054a17b37981a3ca273207a792731aebe: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 17 02:16:30 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:30.937238    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:7ef3ca6ca846a10787f98fd2722d6e4054a17b37981a3ca273207a792731aebe: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID=42fb579e-9919-4aa5-a93f-5e2576652c96
	Aug 17 02:16:38 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:38.830321    3562 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:alpine"
	Aug 17 02:16:38 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:38.830396    3562 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:alpine"
	Aug 17 02:16:38 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:38.830473    3562 kuberuntime_manager.go:864] container &Container{Name:nginx,Image:nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-whc4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod nginx-svc_default(842360d0-8253-4ab4-b42b-9940bf1090e0): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack
image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 17 02:16:38 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:38.830524    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID=842360d0-8253-4ab4-b42b-9940bf1090e0
	Aug 17 02:16:42 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:42.949470    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=42fb579e-9919-4aa5-a93f-5e2576652c96
	Aug 17 02:16:49 functional-20210817021007-1554185 kubelet[3562]: E0817 02:16:49.947811    3562 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=842360d0-8253-4ab4-b42b-9940bf1090e0
	
	* 
	* ==> storage-provisioner [2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1] <==
	* I0817 02:12:06.128117       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 02:12:06.167270       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 02:12:06.167311       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 02:12:06.188392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 02:12:06.188647       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210817021007-1554185_912274ee-7890-41c4-9351-07c103e69cda!
	I0817 02:12:06.192793       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cdcc0882-50d6-400a-a02f-a68b7dbdaaf2", APIVersion:"v1", ResourceVersion:"515", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210817021007-1554185_912274ee-7890-41c4-9351-07c103e69cda became leader
	I0817 02:12:06.289504       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20210817021007-1554185_912274ee-7890-41c4-9351-07c103e69cda!
	
	* 
	* ==> storage-provisioner [a134cfa72ef30019193e5918256bcb07ed51a78c56f83e577fe6b8e87008a846] <==
	* I0817 02:13:12.280450       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 02:13:12.329015       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 02:13:12.329558       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 02:13:29.951233       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 02:13:29.951393       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210817021007-1554185_c54faadf-34aa-435d-b3d8-b964e33c46aa!
	I0817 02:13:29.952273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cdcc0882-50d6-400a-a02f-a68b7dbdaaf2", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210817021007-1554185_c54faadf-34aa-435d-b3d8-b964e33c46aa became leader
	I0817 02:13:30.051952       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20210817021007-1554185_c54faadf-34aa-435d-b3d8-b964e33c46aa!
	I0817 02:13:49.552295       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0817 02:13:49.564242       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    48a1efff-62d8-4c0f-9347-f179c8966362 479 0 2021-08-17 02:11:17 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2021-08-17 02:11:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-77859ccc-1275-4857-84e9-7c2e5f8b693f &PersistentVolumeClaim{ObjectMeta:{myclaim  default  77859ccc-1275-4857-84e9-7c2e5f8b693f 707 0 2021-08-17 02:13:49 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2021-08-17 02:13:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2021-08-17 02:13:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim

	I0817 02:13:49.568009       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-77859ccc-1275-4857-84e9-7c2e5f8b693f" provisioned
	I0817 02:13:49.568044       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0817 02:13:49.568050       1 volume_store.go:212] Trying to save persistentvolume "pvc-77859ccc-1275-4857-84e9-7c2e5f8b693f"
	I0817 02:13:49.570768       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"77859ccc-1275-4857-84e9-7c2e5f8b693f", APIVersion:"v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0817 02:13:49.604381       1 volume_store.go:219] persistentvolume "pvc-77859ccc-1275-4857-84e9-7c2e5f8b693f" saved
	I0817 02:13:49.604620       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"77859ccc-1275-4857-84e9-7c2e5f8b693f", APIVersion:"v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-77859ccc-1275-4857-84e9-7c2e5f8b693f
	

-- /stdout --
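The claim the provisioner handled above is described by its own log line: default/myclaim, ReadWriteOnce, a 500Mi request, and the default "standard" storage class. Below is a minimal client-go sketch that reconstructs that object from the logged spec; it is illustrative only. The field values come from the log, while the package layout and printing are assumptions rather than anything taken from the test code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	storageClass := "standard"

	// Reconstruction of default/myclaim as logged by the provisioner:
	// ReadWriteOnce access, 500Mi request, default "standard" storage class.
	claim := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "myclaim",
			Namespace: "default",
		},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("500Mi"),
				},
			},
		},
	}

	fmt.Printf("%s/%s requests %s %v\n",
		claim.Namespace, claim.Name,
		claim.Spec.Resources.Requests.Storage().String(),
		claim.Spec.AccessModes)
}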
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210817021007-1554185 -n functional-20210817021007-1554185
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20210817021007-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: nginx-svc sp-pod
helpers_test.go:273: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20210817021007-1554185 describe pod nginx-svc sp-pod
helpers_test.go:281: (dbg) kubectl --context functional-20210817021007-1554185 describe pod nginx-svc sp-pod:

-- stdout --
	Name:         nginx-svc
	Namespace:    default
	Priority:     0
	Node:         functional-20210817021007-1554185/192.168.49.2
	Start Time:   Tue, 17 Aug 2021 02:13:34 +0000
	Labels:       run=nginx-svc
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-whc4s (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-whc4s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  3m17s                  default-scheduler  Successfully assigned default/nginx-svc to functional-20210817021007-1554185
	  Warning  Failed     2m34s (x3 over 3m16s)  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    111s (x4 over 3m17s)   kubelet            Pulling image "nginx:alpine"
	  Warning  Failed     110s (x4 over 3m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     110s                   kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     95s (x6 over 3m15s)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    82s (x7 over 3m15s)    kubelet            Back-off pulling image "nginx:alpine"
	
	
	Name:         sp-pod
	Namespace:    default
	Priority:     0
	Node:         functional-20210817021007-1554185/192.168.49.2
	Start Time:   Tue, 17 Aug 2021 02:13:49 +0000
	Labels:       test=storage-provisioner
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcwlv (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-mcwlv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-20210817021007-1554185
	  Normal   Pulling    105s (x4 over 3m2s)  kubelet            Pulling image "nginx"
	  Warning  Failed     104s (x4 over 3m1s)  kubelet            Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     104s (x4 over 3m1s)  kubelet            Error: ErrImagePull
	  Warning  Failed     77s (x6 over 3m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    63s (x7 over 3m)     kubelet            Back-off pulling image "nginx"

-- /stdout --
helpers_test.go:284: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:285: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (188.25s)
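Both pods in the describe output above are stuck in ErrImagePull / ImagePullBackOff because docker.io answered 429 Too Many Requests, so neither nginx-svc nor sp-pod reaches Running. The post-mortem helper finds such pods with kubectl get po -A --field-selector=status.phase!=Running; the sketch below is a rough client-go equivalent of that query, not the helper's actual code, and the kubeconfig handling is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the default kubeconfig (path handling is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same filter the helper passes to kubectl: every pod whose phase is not Running.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}

	// For pods like nginx-svc and sp-pod this surfaces the waiting reason (ErrImagePull).
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, cs.State.Waiting.Reason)
			}
		}
	}
}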

TestFunctional/parallel/BuildImage (5.77s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage


=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 image build -t localhost/my-image:functional-20210817021007-1554185 testdata/build

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 image build -t localhost/my-image:functional-20210817021007-1554185 testdata/build: (2.3943688s)
functional_test.go:415: (dbg) Stderr: out/minikube-linux-arm64 -p functional-20210817021007-1554185 image build -t localhost/my-image:functional-20210817021007-1554185 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 77B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 ERROR: failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:49fe19ce9b78d2f7b8dbcbca928c73652dba2fe797fb078453f5601a4f49e499: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
------
> [internal] load metadata for docker.io/library/busybox:latest:
------
Dockerfile:1
--------------------
1 | >>> FROM busybox
2 |     RUN true
3 |     ADD content.txt /
--------------------
error: failed to solve: busybox: failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:49fe19ce9b78d2f7b8dbcbca928c73652dba2fe797fb078453f5601a4f49e499: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
functional_test.go:373: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210817021007-1554185 -- sudo crictl inspecti localhost/my-image:functional-20210817021007-1554185

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:373: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p functional-20210817021007-1554185 -- sudo crictl inspecti localhost/my-image:functional-20210817021007-1554185: exit status 1 (329.638985ms)

-- stdout --
	FATA[0000] no such image "localhost/my-image:functional-20210817021007-1554185" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:387: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210817021007-1554185 -- sudo crictl images

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:422: (dbg) images: 
-- stdout --
	IMAGE                                         TAG                                                IMAGE ID            SIZE
	docker.io/kindest/kindnetd                    v20210326-1e038dc5                                 f37b7c809e5dc       54.8MB
	docker.io/kubernetesui/dashboard              v2.1.0                                             85e6c0cff043f       66.6MB
	docker.io/kubernetesui/metrics-scraper        v1.0.4                                             a262dd7495d90       14.9MB
	docker.io/library/busybox                     load-from-file-functional-20210817021007-1554185   19d689bc58fd6       1.6MB
	docker.io/library/minikube-local-cache-test   functional-20210817021007-1554185                  8a6dbaf7a758c       1.75kB
	gcr.io/k8s-minikube/storage-provisioner       v5                                                 ba04bb24b9575       8.03MB
	k8s.gcr.io/coredns/coredns                    v1.8.0                                             1a1f05a2cd7c2       11.6MB
	k8s.gcr.io/etcd                               3.4.13-0                                           05b738aa1bc63       135MB
	k8s.gcr.io/kube-apiserver                     v1.21.3                                            44a6d50ef170d       27.7MB
	k8s.gcr.io/kube-controller-manager            v1.21.3                                            cb310ff289d79       26.7MB
	k8s.gcr.io/kube-proxy                         v1.21.3                                            4ea38350a1beb       34.3MB
	k8s.gcr.io/kube-scheduler                     v1.21.3                                            31a3b96cefc1e       13MB
	k8s.gcr.io/pause                              3.1                                                8057e0500773a       262kB
	k8s.gcr.io/pause                              3.3                                                3d18732f8686c       249kB
	k8s.gcr.io/pause                              3.4.1                                              d055819ed991a       253kB
	k8s.gcr.io/pause                              latest                                             8cb2091f603e7       71.3kB

-- /stdout --
functional_test.go:423: listing images: exit status 1

-- stdout --
	FATA[0000] no such image "localhost/my-image:functional-20210817021007-1554185" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/parallel/BuildImage]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20210817021007-1554185
helpers_test.go:236: (dbg) docker inspect functional-20210817021007-1554185:

-- stdout --
	[
	    {
	        "Id": "3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f",
	        "Created": "2021-08-17T02:10:08.384823248Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1577442,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:10:08.832682993Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f/hosts",
	        "LogPath": "/var/lib/docker/containers/3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f/3a09a5edabdd8ef6788b4b8320e88aded5dc3f3ea47175f8d3b82e43f8cc581f-json.log",
	        "Name": "/functional-20210817021007-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20210817021007-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20210817021007-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/db26eeba6313c2f4c4b298a3f1f489af326b947a7b63e73db5690af276f7b2ed-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db26eeba6313c2f4c4b298a3f1f489af326b947a7b63e73db5690af276f7b2ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db26eeba6313c2f4c4b298a3f1f489af326b947a7b63e73db5690af276f7b2ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db26eeba6313c2f4c4b298a3f1f489af326b947a7b63e73db5690af276f7b2ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20210817021007-1554185",
	                "Source": "/var/lib/docker/volumes/functional-20210817021007-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20210817021007-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20210817021007-1554185",
	                "name.minikube.sigs.k8s.io": "functional-20210817021007-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "80a595a224f313e0fb37fad778fd6386bd0bd60d5a060ad8f96d4d828e4a03f0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50324"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50323"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50320"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50322"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50321"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/80a595a224f3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20210817021007-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3a09a5edabdd",
	                        "functional-20210817021007-1554185"
	                    ],
	                    "NetworkID": "166d57bccf5218156918b5f2c2dbeef588244fee8dde040bbdcb35c4f9031abc",
	                    "EndpointID": "cb8431a21bd502ec342dd4f59e1df650f0f626d000d09c2ae9b2e96fadf8b9fa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
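The docker inspect dump above is mainly useful here for the published port map: 22, 2376, 5000, 8441 and 32443 are all bound to 127.0.0.1 with per-profile host ports (for example 8441/tcp on 50321 for the apiserver). A small sketch using the Docker Go SDK to read the same port bindings is below; the container name is taken from the output above, while the client setup and printing are assumptions.

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// Talk to the local engine the way the docker CLI does (env-based configuration).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Same container the post-mortem inspects.
	info, err := cli.ContainerInspect(context.Background(), "functional-20210817021007-1554185")
	if err != nil {
		panic(err)
	}

	// NetworkSettings.Ports mirrors the "Ports" block in the JSON above,
	// e.g. 8441/tcp -> 127.0.0.1:50321 for the Kubernetes apiserver.
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}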
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-20210817021007-1554185 -n functional-20210817021007-1554185

=== CONT  TestFunctional/parallel/BuildImage
helpers_test.go:245: <<< TestFunctional/parallel/BuildImage FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/BuildImage]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 logs -n 25

=== CONT  TestFunctional/parallel/BuildImage
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 logs -n 25: (1.601929207s)
helpers_test.go:253: TestFunctional/parallel/BuildImage logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                   Args                                   |              Profile              |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| cache   | list                                                                     | minikube                          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:12:32 UTC | Tue, 17 Aug 2021 02:12:32 UTC |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:12:32 UTC | Tue, 17 Aug 2021 02:12:32 UTC |
	|         | ssh sudo crictl images                                                   |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:12:32 UTC | Tue, 17 Aug 2021 02:12:32 UTC |
	|         | ssh sudo crictl rmi                                                      |                                   |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                  |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:12:33 UTC | Tue, 17 Aug 2021 02:12:34 UTC |
	|         | cache reload                                                             |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:12:34 UTC | Tue, 17 Aug 2021 02:12:35 UTC |
	|         | ssh sudo crictl inspecti                                                 |                                   |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                  |                                   |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.1                                              | minikube                          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:12:35 UTC | Tue, 17 Aug 2021 02:12:35 UTC |
	| cache   | delete k8s.gcr.io/pause:latest                                           | minikube                          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:12:35 UTC | Tue, 17 Aug 2021 02:12:35 UTC |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:12:35 UTC | Tue, 17 Aug 2021 02:12:35 UTC |
	|         | kubectl -- --context                                                     |                                   |         |         |                               |                               |
	|         | functional-20210817021007-1554185                                        |                                   |         |         |                               |                               |
	|         | get pods                                                                 |                                   |         |         |                               |                               |
	| kubectl | --profile=functional-20210817021007-1554185                              | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:12:35 UTC | Tue, 17 Aug 2021 02:12:35 UTC |
	|         | -- --context                                                             |                                   |         |         |                               |                               |
	|         | functional-20210817021007-1554185 get pods                               |                                   |         |         |                               |                               |
	| start   | -p functional-20210817021007-1554185                                     | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:12:35 UTC | Tue, 17 Aug 2021 02:13:22 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                                   |         |         |                               |                               |
	|         | --wait=all                                                               |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:22 UTC | Tue, 17 Aug 2021 02:13:23 UTC |
	|         | logs                                                                     |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185 logs --file                            | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:23 UTC | Tue, 17 Aug 2021 02:13:24 UTC |
	|         | /tmp/functional-20210817021007-1554185016174317/logs.txt                 |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:25 UTC | Tue, 17 Aug 2021 02:13:25 UTC |
	|         | config unset cpus                                                        |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:25 UTC | Tue, 17 Aug 2021 02:13:25 UTC |
	|         | config set cpus 2                                                        |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:25 UTC | Tue, 17 Aug 2021 02:13:25 UTC |
	|         | ssh sudo cat                                                             |                                   |         |         |                               |                               |
	|         | /etc/test/nested/copy/1554185/hosts                                      |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:25 UTC | Tue, 17 Aug 2021 02:13:25 UTC |
	|         | config get cpus                                                          |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:25 UTC | Tue, 17 Aug 2021 02:13:25 UTC |
	|         | config unset cpus                                                        |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185                                        | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:25 UTC | Tue, 17 Aug 2021 02:13:25 UTC |
	|         | image ls                                                                 |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185 image load                             | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:26 UTC | Tue, 17 Aug 2021 02:13:26 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/busybox.tar        |                                   |         |         |                               |                               |
	| ssh     | -p                                                                       | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:26 UTC | Tue, 17 Aug 2021 02:13:27 UTC |
	|         | functional-20210817021007-1554185                                        |                                   |         |         |                               |                               |
	|         | -- sudo crictl images                                                    |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185 image build -t                         | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:25 UTC | Tue, 17 Aug 2021 02:13:28 UTC |
	|         | localhost/my-image:functional-20210817021007-1554185                     |                                   |         |         |                               |                               |
	|         | testdata/build                                                           |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185 image load                             | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:27 UTC | Tue, 17 Aug 2021 02:13:28 UTC |
	|         | docker.io/library/busybox:remove-functional-20210817021007-1554185       |                                   |         |         |                               |                               |
	| -p      | functional-20210817021007-1554185 image rm                               | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:28 UTC | Tue, 17 Aug 2021 02:13:28 UTC |
	|         | docker.io/library/busybox:remove-functional-20210817021007-1554185       |                                   |         |         |                               |                               |
	| ssh     | -p                                                                       | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:28 UTC | Tue, 17 Aug 2021 02:13:28 UTC |
	|         | functional-20210817021007-1554185                                        |                                   |         |         |                               |                               |
	|         | -- sudo crictl images                                                    |                                   |         |         |                               |                               |
	| ssh     | -p                                                                       | functional-20210817021007-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:13:28 UTC | Tue, 17 Aug 2021 02:13:29 UTC |
	|         | functional-20210817021007-1554185                                        |                                   |         |         |                               |                               |
	|         | -- sudo crictl images                                                    |                                   |         |         |                               |                               |
	|---------|--------------------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 02:12:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 02:12:35.909280 1581051 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:12:35.909413 1581051 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:12:35.909417 1581051 out.go:311] Setting ErrFile to fd 2...
	I0817 02:12:35.909419 1581051 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:12:35.909548 1581051 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:12:35.909778 1581051 out.go:305] Setting JSON to false
	I0817 02:12:35.910670 1581051 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":35694,"bootTime":1629130662,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:12:35.910739 1581051 start.go:121] virtualization:  
	I0817 02:12:35.913122 1581051 out.go:177] * [functional-20210817021007-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:12:35.915776 1581051 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:12:35.914109 1581051 notify.go:169] Checking for updates...
	I0817 02:12:35.917423 1581051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:12:35.919141 1581051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:12:35.920764 1581051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:12:35.921191 1581051 config.go:177] Loaded profile config "functional-20210817021007-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:12:35.921222 1581051 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:12:35.962249 1581051 docker.go:132] docker version: linux-20.10.8
	I0817 02:12:35.962323 1581051 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:12:36.058480 1581051 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:12:36.002195045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:12:36.058571 1581051 docker.go:244] overlay module found
	I0817 02:12:36.060511 1581051 out.go:177] * Using the docker driver based on existing profile
	I0817 02:12:36.060531 1581051 start.go:278] selected driver: docker
	I0817 02:12:36.060536 1581051 start.go:751] validating driver "docker" against &{Name:functional-20210817021007-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210817021007-1554185 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:tr
ue storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:12:36.060649 1581051 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 02:12:36.060760 1581051 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:12:36.150669 1581051 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:12:36.097591982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:12:36.151096 1581051 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 02:12:36.151113 1581051 cni.go:93] Creating CNI manager for ""
	I0817 02:12:36.151118 1581051 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:12:36.151125 1581051 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 02:12:36.151130 1581051 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 02:12:36.151134 1581051 start_flags.go:277] config:
	{Name:functional-20210817021007-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210817021007-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:tr
ue storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:12:36.153581 1581051 out.go:177] * Starting control plane node functional-20210817021007-1554185 in cluster functional-20210817021007-1554185
	I0817 02:12:36.153616 1581051 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 02:12:36.155802 1581051 out.go:177] * Pulling base image ...
	I0817 02:12:36.155823 1581051 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:12:36.155854 1581051 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 02:12:36.155860 1581051 cache.go:56] Caching tarball of preloaded images
	I0817 02:12:36.155996 1581051 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 02:12:36.156025 1581051 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 02:12:36.156154 1581051 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/config.json ...
	I0817 02:12:36.156363 1581051 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 02:12:36.200726 1581051 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 02:12:36.200738 1581051 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 02:12:36.200754 1581051 cache.go:205] Successfully downloaded all kic artifacts
	I0817 02:12:36.200792 1581051 start.go:313] acquiring machines lock for functional-20210817021007-1554185: {Name:mkbbae4b071b337a918efc1882b1450ea6e84bdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 02:12:36.200887 1581051 start.go:317] acquired machines lock for "functional-20210817021007-1554185" in 74.182µs
	I0817 02:12:36.200905 1581051 start.go:93] Skipping create...Using existing machine configuration
	I0817 02:12:36.200909 1581051 fix.go:55] fixHost starting: 
	I0817 02:12:36.201185 1581051 cli_runner.go:115] Run: docker container inspect functional-20210817021007-1554185 --format={{.State.Status}}
	I0817 02:12:36.231823 1581051 fix.go:108] recreateIfNeeded on functional-20210817021007-1554185: state=Running err=<nil>
	W0817 02:12:36.231837 1581051 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 02:12:36.233868 1581051 out.go:177] * Updating the running docker "functional-20210817021007-1554185" container ...
	I0817 02:12:36.233891 1581051 machine.go:88] provisioning docker machine ...
	I0817 02:12:36.233904 1581051 ubuntu.go:169] provisioning hostname "functional-20210817021007-1554185"
	I0817 02:12:36.233961 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:36.264819 1581051 main.go:130] libmachine: Using SSH client type: native
	I0817 02:12:36.264986 1581051 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50324 <nil> <nil>}
	I0817 02:12:36.264998 1581051 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20210817021007-1554185 && echo "functional-20210817021007-1554185" | sudo tee /etc/hostname
	I0817 02:12:36.389744 1581051 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20210817021007-1554185
	
	I0817 02:12:36.389820 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:36.422956 1581051 main.go:130] libmachine: Using SSH client type: native
	I0817 02:12:36.423123 1581051 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50324 <nil> <nil>}
	I0817 02:12:36.423145 1581051 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20210817021007-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20210817021007-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20210817021007-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 02:12:36.538050 1581051 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 02:12:36.538065 1581051 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 02:12:36.538083 1581051 ubuntu.go:177] setting up certificates
	I0817 02:12:36.538091 1581051 provision.go:83] configureAuth start
	I0817 02:12:36.538145 1581051 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210817021007-1554185
	I0817 02:12:36.569390 1581051 provision.go:138] copyHostCerts
	I0817 02:12:36.569436 1581051 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 02:12:36.569442 1581051 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 02:12:36.569490 1581051 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 02:12:36.569565 1581051 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 02:12:36.569571 1581051 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 02:12:36.569588 1581051 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 02:12:36.569632 1581051 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 02:12:36.569636 1581051 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 02:12:36.569650 1581051 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 02:12:36.569690 1581051 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.functional-20210817021007-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20210817021007-1554185]
	I0817 02:12:37.156727 1581051 provision.go:172] copyRemoteCerts
	I0817 02:12:37.156780 1581051 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 02:12:37.156818 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:37.188687 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:12:37.273344 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 02:12:37.288909 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0817 02:12:37.304156 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 02:12:37.319497 1581051 provision.go:86] duration metric: configureAuth took 781.387536ms
	I0817 02:12:37.319510 1581051 ubuntu.go:193] setting minikube options for container-runtime
	I0817 02:12:37.319709 1581051 config.go:177] Loaded profile config "functional-20210817021007-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:12:37.319717 1581051 machine.go:91] provisioned docker machine in 1.085820111s
	I0817 02:12:37.319722 1581051 start.go:267] post-start starting for "functional-20210817021007-1554185" (driver="docker")
	I0817 02:12:37.319727 1581051 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 02:12:37.319780 1581051 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 02:12:37.319814 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:37.354886 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:12:37.441193 1581051 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 02:12:37.443710 1581051 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 02:12:37.443724 1581051 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 02:12:37.443734 1581051 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 02:12:37.443740 1581051 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 02:12:37.443747 1581051 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 02:12:37.443792 1581051 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 02:12:37.443865 1581051 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 02:12:37.443947 1581051 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/test/nested/copy/1554185/hosts -> hosts in /etc/test/nested/copy/1554185
	I0817 02:12:37.443979 1581051 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1554185
	I0817 02:12:37.449868 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:12:37.464996 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/test/nested/copy/1554185/hosts --> /etc/test/nested/copy/1554185/hosts (40 bytes)
	I0817 02:12:37.479849 1581051 start.go:270] post-start completed in 160.118039ms
	I0817 02:12:37.479892 1581051 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:12:37.479926 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:37.511901 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:12:37.594992 1581051 fix.go:57] fixHost completed within 1.394076934s
	I0817 02:12:37.595007 1581051 start.go:80] releasing machines lock for "functional-20210817021007-1554185", held for 1.394112528s
	I0817 02:12:37.595096 1581051 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210817021007-1554185
	I0817 02:12:37.632593 1581051 ssh_runner.go:149] Run: systemctl --version
	I0817 02:12:37.632634 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:37.632864 1581051 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 02:12:37.632912 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:12:37.676811 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:12:37.690496 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:12:37.892619 1581051 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 02:12:37.902530 1581051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 02:12:37.910908 1581051 docker.go:153] disabling docker service ...
	I0817 02:12:37.910944 1581051 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 02:12:37.919628 1581051 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 02:12:37.929553 1581051 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 02:12:38.028997 1581051 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 02:12:38.129235 1581051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 02:12:38.139023 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 02:12:38.150529 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
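The two Run lines above write /etc/crictl.yaml and a base64-encoded /etc/containerd/config.toml onto the node before containerd is restarted. As an illustrative aside (not part of the captured log output), the generated files could be inspected from the host roughly like this, reusing the binary and profile name that appear elsewhere in this report:

	# Hypothetical inspection commands; the profile name is taken from the log above.
	out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- sudo cat /etc/crictl.yaml
	out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- sudo cat /etc/containerd/config.toml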
	I0817 02:12:38.167340 1581051 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 02:12:38.172829 1581051 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 02:12:38.178373 1581051 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 02:12:38.274556 1581051 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 02:12:38.365061 1581051 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 02:12:38.365121 1581051 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 02:12:38.368962 1581051 start.go:413] Will wait 60s for crictl version
	I0817 02:12:38.369020 1581051 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:12:38.411834 1581051 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T02:12:38Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 02:12:49.458800 1581051 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:12:49.480696 1581051 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
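The first `sudo crictl version` above fails with "server is not initialized yet" because containerd has just been restarted; the retry 11s later succeeds and reports containerd 1.4.9. A minimal manual check of the same condition (illustrative only, not taken from the log; it assumes crictl is configured via the /etc/crictl.yaml written earlier) would be:

	# Check that containerd has finished restarting before querying it over CRI.
	sudo systemctl is-active containerd   # prints "active" once the restart has settled
	sudo crictl version                   # reports RuntimeName/RuntimeVersion once the CRI server is initialized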
	I0817 02:12:49.480743 1581051 ssh_runner.go:149] Run: containerd --version
	I0817 02:12:49.501093 1581051 ssh_runner.go:149] Run: containerd --version
	I0817 02:12:49.522194 1581051 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 02:12:49.522280 1581051 cli_runner.go:115] Run: docker network inspect functional-20210817021007-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 02:12:49.553292 1581051 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 02:12:49.558320 1581051 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0817 02:12:49.558385 1581051 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:12:49.558439 1581051 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:12:49.581251 1581051 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:12:49.581259 1581051 containerd.go:517] Images already preloaded, skipping extraction
	I0817 02:12:49.581296 1581051 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:12:49.603019 1581051 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:12:49.603028 1581051 cache_images.go:74] Images are preloaded, skipping loading
	I0817 02:12:49.603064 1581051 ssh_runner.go:149] Run: sudo crictl info
	I0817 02:12:49.627885 1581051 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0817 02:12:49.627907 1581051 cni.go:93] Creating CNI manager for ""
	I0817 02:12:49.627915 1581051 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:12:49.627923 1581051 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 02:12:49.627935 1581051 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20210817021007-1554185 NodeName:functional-20210817021007-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:
map[]}
	I0817 02:12:49.628062 1581051 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "functional-20210817021007-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 02:12:49.628176 1581051 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-20210817021007-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:functional-20210817021007-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0817 02:12:49.628224 1581051 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 02:12:49.634168 1581051 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 02:12:49.634205 1581051 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 02:12:49.639828 1581051 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (578 bytes)
	I0817 02:12:49.650894 1581051 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 02:12:49.662258 1581051 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1933 bytes)
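The three scp lines above place the kubelet drop-in, the kubelet unit, and the kubeadm config rendered earlier onto the node. A sketch of how those artifacts could be verified in place (hypothetical commands, not part of the log; the paths are the scp destinations shown above):

	# Show the effective kubelet unit, including the 10-kubeadm.conf drop-in.
	sudo systemctl cat kubelet
	# Show the kubeadm config that will be used for the cluster restart.
	sudo cat /var/tmp/minikube/kubeadm.yaml.new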
	I0817 02:12:49.673199 1581051 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 02:12:49.675846 1581051 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185 for IP: 192.168.49.2
	I0817 02:12:49.675880 1581051 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 02:12:49.675891 1581051 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 02:12:49.675944 1581051 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.key
	I0817 02:12:49.675958 1581051 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/apiserver.key.dd3b5fb2
	I0817 02:12:49.675971 1581051 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/proxy-client.key
	I0817 02:12:49.676072 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 02:12:49.676104 1581051 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 02:12:49.676112 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 02:12:49.676133 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 02:12:49.676153 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 02:12:49.676172 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 02:12:49.676212 1581051 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:12:49.677932 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 02:12:49.697926 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 02:12:49.712601 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 02:12:49.727622 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 02:12:49.742153 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 02:12:49.756965 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 02:12:49.771706 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 02:12:49.786157 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 02:12:49.800735 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 02:12:49.815663 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 02:12:49.830168 1581051 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 02:12:49.844530 1581051 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 02:12:49.855418 1581051 ssh_runner.go:149] Run: openssl version
	I0817 02:12:49.859697 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 02:12:49.866082 1581051 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 02:12:49.868776 1581051 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 02:12:49.868816 1581051 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 02:12:49.873038 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 02:12:49.878798 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 02:12:49.884983 1581051 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:12:49.887673 1581051 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:12:49.887713 1581051 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:12:49.891949 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 02:12:49.897792 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 02:12:49.907841 1581051 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 02:12:49.910529 1581051 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 02:12:49.910567 1581051 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 02:12:49.914994 1581051 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 02:12:49.920794 1581051 kubeadm.go:390] StartCluster: {Name:functional-20210817021007-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210817021007-1554185 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:12:49.920892 1581051 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 02:12:49.920931 1581051 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:12:49.943709 1581051 cri.go:76] found id: "dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9"
	I0817 02:12:49.943719 1581051 cri.go:76] found id: "2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1"
	I0817 02:12:49.943724 1581051 cri.go:76] found id: "f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089"
	I0817 02:12:49.943728 1581051 cri.go:76] found id: "50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1"
	I0817 02:12:49.943731 1581051 cri.go:76] found id: "c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231"
	I0817 02:12:49.943735 1581051 cri.go:76] found id: "22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d"
	I0817 02:12:49.943739 1581051 cri.go:76] found id: "2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48"
	I0817 02:12:49.943742 1581051 cri.go:76] found id: "f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6"
	I0817 02:12:49.943746 1581051 cri.go:76] found id: ""
	I0817 02:12:49.943780 1581051 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:12:49.979547 1581051 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537","pid":1879,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537/rootfs","created":"2021-08-17T02:12:05.940254331Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f757e77c-ddf8-4d74-8754-424b9f0da712"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48","pid":1082,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2003791d6509e9ecb04e813cff2a13208974
2b896aea918cb0421a664f7f1f48","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48/rootfs","created":"2021-08-17T02:10:53.140506409Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d","pid":1080,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d/rootfs","created":"2021-08-17T02:10:53.141708982Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-
id":"af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1","pid":1968,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1/rootfs","created":"2021-08-17T02:12:06.069447788Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d","pid":1511,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d","rootfs":
"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d/rootfs","created":"2021-08-17T02:11:16.691380186Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-j6pl5_05b56857-a0f7-456b-a198-2eadf300625f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1","pid":1569,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1/rootfs","created":"2021-08-17T02:11:16.810715576Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernet
es.cri.sandbox-id":"a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2","pid":917,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2/rootfs","created":"2021-08-17T02:10:52.924982147Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-20210817021007-1554185_0ca1ece4f336742d796dd3951c235ff2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46","pid":965,"status":"running","bundle":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46/rootfs","created":"2021-08-17T02:10:52.976256055Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-20210817021007-1554185_af2969cdb2ca0145027b6cf2e1da9f5d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f","pid":1501,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f/rootfs","created":"2021-08-
17T02:11:16.686493064Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-5crrc_ff33274b-c870-4110-af24-a4056969c55a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799","pid":953,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799/rootfs","created":"2021-08-17T02:10:52.955916107Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-2021081702100
7-1554185_cfd18c863a995943023d977afa17770a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231","pid":1149,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231/rootfs","created":"2021-08-17T02:10:53.273710526Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021","pid":1929,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021","rootfs":"/run/containerd/io.containerd.runtime.v2.
task/k8s.io/d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021/rootfs","created":"2021-08-17T02:12:05.994010101Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-hwn2j_f70985d7-d2b7-408b-9d54-8d6c0b83ab1b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60","pid":1014,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60/rootfs","created":"2021-08-17T02:10:53.04222091Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d
60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-20210817021007-1554185_81e4d679ba718c5a1e1a22193ffc109a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9","pid":1991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9/rootfs","created":"2021-08-17T02:12:06.092583573Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6","pid":1051,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45d89600a90
429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6/rootfs","created":"2021-08-17T02:10:53.085902081Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089/rootfs","created":"2021-08-17T02:11:16.908297558Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.
cri.sandbox-id":"3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d"},"owner":"root"}]
	I0817 02:12:49.979739 1581051 cri.go:113] list returned 16 containers
	I0817 02:12:49.979747 1581051 cri.go:116] container: {ID:097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537 Status:running}
	I0817 02:12:49.979756 1581051 cri.go:118] skipping 097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537 - not in ps
	I0817 02:12:49.979760 1581051 cri.go:116] container: {ID:2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48 Status:running}
	I0817 02:12:49.979774 1581051 cri.go:122] skipping {2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48 running}: state = "running", want "paused"
	I0817 02:12:49.979783 1581051 cri.go:116] container: {ID:22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d Status:running}
	I0817 02:12:49.979788 1581051 cri.go:122] skipping {22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d running}: state = "running", want "paused"
	I0817 02:12:49.979793 1581051 cri.go:116] container: {ID:2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1 Status:running}
	I0817 02:12:49.979800 1581051 cri.go:122] skipping {2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1 running}: state = "running", want "paused"
	I0817 02:12:49.979805 1581051 cri.go:116] container: {ID:3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d Status:running}
	I0817 02:12:49.979810 1581051 cri.go:118] skipping 3c1724c3b0db2e27d3a673bc83b4a027e31045be77f2bb68ee0871686343c02d - not in ps
	I0817 02:12:49.979813 1581051 cri.go:116] container: {ID:50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1 Status:running}
	I0817 02:12:49.979818 1581051 cri.go:122] skipping {50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1 running}: state = "running", want "paused"
	I0817 02:12:49.979823 1581051 cri.go:116] container: {ID:9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2 Status:running}
	I0817 02:12:49.979828 1581051 cri.go:118] skipping 9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2 - not in ps
	I0817 02:12:49.979831 1581051 cri.go:116] container: {ID:9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46 Status:running}
	I0817 02:12:49.979836 1581051 cri.go:118] skipping 9f018263acf74e91735e95076160398a8201635439f5bd51d5a890bb43380a46 - not in ps
	I0817 02:12:49.979839 1581051 cri.go:116] container: {ID:a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f Status:running}
	I0817 02:12:49.979844 1581051 cri.go:118] skipping a18545e5e3a38c5a94b1de80f2b9ffa84b0b645fe443cdc1eff96fd68f19667f - not in ps
	I0817 02:12:49.979847 1581051 cri.go:116] container: {ID:af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799 Status:running}
	I0817 02:12:49.979852 1581051 cri.go:118] skipping af2ad6c437351de4da6d43784860450a7249f571b75fb08d2599505e5c8b1799 - not in ps
	I0817 02:12:49.979855 1581051 cri.go:116] container: {ID:c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231 Status:running}
	I0817 02:12:49.979860 1581051 cri.go:122] skipping {c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231 running}: state = "running", want "paused"
	I0817 02:12:49.979864 1581051 cri.go:116] container: {ID:d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021 Status:running}
	I0817 02:12:49.979869 1581051 cri.go:118] skipping d307bb881660d2456d7e422a8fbdffbc4968823bc0b229207ac45d51020d4021 - not in ps
	I0817 02:12:49.979874 1581051 cri.go:116] container: {ID:d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60 Status:running}
	I0817 02:12:49.979879 1581051 cri.go:118] skipping d6d60a35ea281cf10df09b2a448c0517bb2087bef635dcf367e9c03b4b719d60 - not in ps
	I0817 02:12:49.979882 1581051 cri.go:116] container: {ID:dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9 Status:running}
	I0817 02:12:49.979887 1581051 cri.go:122] skipping {dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9 running}: state = "running", want "paused"
	I0817 02:12:49.979891 1581051 cri.go:116] container: {ID:f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6 Status:running}
	I0817 02:12:49.979896 1581051 cri.go:122] skipping {f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6 running}: state = "running", want "paused"
	I0817 02:12:49.979900 1581051 cri.go:116] container: {ID:f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089 Status:running}
	I0817 02:12:49.979905 1581051 cri.go:122] skipping {f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089 running}: state = "running", want "paused"
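The skip decisions above reduce to a simple filter: a container is acted on only if it appears in the `crictl ps` listing and its state matches the wanted one (here "paused", so every running container is skipped). A minimal Go sketch of that selection logic, with a hypothetical container type and helper name rather than minikube's actual cri.go code:

    package main

    import "fmt"

    // container mirrors the {ID Status} pairs logged above.
    type container struct {
    	ID     string
    	Status string
    }

    // selectByState keeps only containers that appear in the `crictl ps`
    // listing (inPs) and whose status equals the wanted state; everything else
    // is skipped, matching the "not in ps" and `state = "running", want
    // "paused"` lines above.
    func selectByState(all []container, inPs map[string]bool, want string) []container {
    	var out []container
    	for _, c := range all {
    		if !inPs[c.ID] {
    			continue // skipping <id> - not in ps
    		}
    		if c.Status != want {
    			continue // skipping {<id> <status>}: state mismatch
    		}
    		out = append(out, c)
    	}
    	return out
    }

    func main() {
    	all := []container{{ID: "097a0e56d670", Status: "running"}}
    	inPs := map[string]bool{"097a0e56d670": true}
    	fmt.Println(selectByState(all, inPs, "paused")) // prints [] - nothing to pause
    }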
	I0817 02:12:49.979943 1581051 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 02:12:49.985942 1581051 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 02:12:49.985950 1581051 kubeadm.go:600] restartCluster start
	I0817 02:12:49.985987 1581051 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 02:12:49.991392 1581051 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:12:49.992237 1581051 kubeconfig.go:93] found "functional-20210817021007-1554185" server: "https://192.168.49.2:8441"
	I0817 02:12:49.994163 1581051 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 02:12:49.999939 1581051 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-17 02:10:36.950558563 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-17 02:12:49.669231826 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
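The reconfigure decision above hinges on a unified diff between the kubeadm config already on disk and the freshly generated one: any difference (here, the changed enable-admission-plugins value) triggers a cluster restart against the new file. A rough Go sketch of that drift check, assuming a local diff binary and hypothetical function names (minikube itself runs diff over SSH with sudo):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // needsReconfigure reports whether the on-disk kubeadm config differs from
    // the newly generated one: diff exits 0 when identical, 1 when the files
    // differ, and anything else is a real error.
    func needsReconfigure(current, proposed string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", current, proposed).CombinedOutput()
    	if err == nil {
    		return false, "", nil // identical, nothing to do
    	}
    	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
    		return true, string(out), nil // files differ; out holds the unified diff
    	}
    	return false, "", err // diff itself failed (missing file, etc.)
    }

    func main() {
    	differ, diff, err := needsReconfigure(
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(differ, err)
    	if differ {
    		fmt.Print(diff)
    	}
    }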
	I0817 02:12:49.999949 1581051 kubeadm.go:1032] stopping kube-system containers ...
	I0817 02:12:49.999957 1581051 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:12:49.999994 1581051 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:12:50.022390 1581051 cri.go:76] found id: "dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9"
	I0817 02:12:50.022402 1581051 cri.go:76] found id: "2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1"
	I0817 02:12:50.022407 1581051 cri.go:76] found id: "f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089"
	I0817 02:12:50.022411 1581051 cri.go:76] found id: "50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1"
	I0817 02:12:50.022415 1581051 cri.go:76] found id: "c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231"
	I0817 02:12:50.022420 1581051 cri.go:76] found id: "22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d"
	I0817 02:12:50.022423 1581051 cri.go:76] found id: "2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48"
	I0817 02:12:50.022427 1581051 cri.go:76] found id: "f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6"
	I0817 02:12:50.022431 1581051 cri.go:76] found id: ""
	I0817 02:12:50.022435 1581051 cri.go:221] Stopping containers: [dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9 2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1 f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089 50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1 c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231 22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d 2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48 f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6]
	I0817 02:12:50.022471 1581051 ssh_runner.go:149] Run: which crictl
	I0817 02:12:50.024996 1581051 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9 2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1 f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089 50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1 c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231 22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d 2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48 f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6
	I0817 02:12:50.399512 1581051 ssh_runner.go:149] Run: sudo systemctl stop kubelet
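Before reconfiguring, every kube-system container is located by its io.kubernetes.pod.namespace label and stopped in a single crictl invocation, after which kubelet is stopped. A simplified Go sketch of that step, using the same crictl flags as the log but running locally instead of over SSH (hypothetical helper name):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // stopNamespaceContainers lists all containers in the given namespace via
    // crictl's pod-namespace label and stops them in one crictl stop call.
    func stopNamespaceContainers(namespace string) error {
    	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace="+namespace).Output()
    	if err != nil {
    		return fmt.Errorf("crictl ps: %v", err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return nil // nothing to stop
    	}
    	args := append([]string{"stop"}, ids...)
    	if out, err := exec.Command("crictl", args...).CombinedOutput(); err != nil {
    		return fmt.Errorf("crictl stop: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(stopNamespaceContainers("kube-system"))
    }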
	I0817 02:12:50.465869 1581051 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:12:50.472097 1581051 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 02:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 17 02:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 17 02:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 17 02:10 /etc/kubernetes/scheduler.conf
	
	I0817 02:12:50.472133 1581051 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0817 02:12:50.478010 1581051 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0817 02:12:50.484019 1581051 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0817 02:12:50.489560 1581051 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:12:50.489592 1581051 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 02:12:50.495736 1581051 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0817 02:12:50.501202 1581051 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:12:50.501236 1581051 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 02:12:50.506576 1581051 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 02:12:50.512296 1581051 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 02:12:50.512304 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:12:50.574426 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:12:53.312237 1581051 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.737792739s)
	I0817 02:12:53.312253 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:12:53.483180 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:12:53.590437 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
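The restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml instead of performing a full init. A rough Go sketch of that sequencing, with simplified error handling and a hypothetical helper name (not minikube's kubeadm.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runInitPhases re-runs the kubeadm init phases seen in the log, in order,
    // against the regenerated config file.
    func runInitPhases(kubeadmYAML string) error {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, phase...)
    		args = append(args, "--config", kubeadmYAML)
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("kubeadm %v: %v\n%s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }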
	I0817 02:12:53.656853 1581051 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:12:53.656902 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:54.168420 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:54.668486 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:55.168561 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:55.668575 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:56.168151 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:56.667899 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:57.168127 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:57.668107 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:58.168117 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:58.668124 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:59.168110 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:12:59.667945 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:00.168635 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:00.667887 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:01.167918 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:01.667971 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:02.168125 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:02.667896 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:03.167890 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:03.191872 1581051 api_server.go:70] duration metric: took 9.535019578s to wait for apiserver process to appear ...
	I0817 02:13:03.191883 1581051 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:13:03.191891 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:08.192142 1581051 api_server.go:255] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 02:13:08.692786 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:09.649400 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 02:13:09.649413 1581051 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 02:13:09.692585 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:09.747515 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 02:13:09.747528 1581051 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 02:13:10.193053 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:10.201061 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 02:13:10.201072 1581051 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 02:13:10.692292 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:10.700691 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 02:13:10.700705 1581051 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 02:13:11.192313 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:11.200697 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0817 02:13:11.213450 1581051 api_server.go:139] control plane version: v1.21.3
	I0817 02:13:11.213461 1581051 api_server.go:129] duration metric: took 8.02157225s to wait for apiserver health ...
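The health wait above polls /healthz until it returns 200, treating the intermediate 403 and 500 responses as "not ready yet" and retrying on a short interval until a deadline. A minimal Go sketch of such a polling loop (hypothetical helper; TLS verification is skipped only to keep the example self-contained):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers
    // 200 OK or the deadline passes; 403/500 responses just mean "retry".
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.49.2:8441/healthz", 2*time.Minute))
    }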
	I0817 02:13:11.213468 1581051 cni.go:93] Creating CNI manager for ""
	I0817 02:13:11.213478 1581051 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:13:11.215323 1581051 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:13:11.215392 1581051 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:13:11.218521 1581051 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 02:13:11.218528 1581051 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:13:11.238945 1581051 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:13:11.504341 1581051 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:13:11.514704 1581051 system_pods.go:59] 8 kube-system pods found
	I0817 02:13:11.514721 1581051 system_pods.go:61] "coredns-558bd4d5db-hwn2j" [f70985d7-d2b7-408b-9d54-8d6c0b83ab1b] Running
	I0817 02:13:11.514726 1581051 system_pods.go:61] "etcd-functional-20210817021007-1554185" [c22ede17-13df-4f29-9c3a-172efe4e4b09] Running
	I0817 02:13:11.514729 1581051 system_pods.go:61] "kindnet-j6pl5" [05b56857-a0f7-456b-a198-2eadf300625f] Running
	I0817 02:13:11.514734 1581051 system_pods.go:61] "kube-apiserver-functional-20210817021007-1554185" [f85b64df-d133-4ace-8fdf-6f4282916df8] Pending
	I0817 02:13:11.514738 1581051 system_pods.go:61] "kube-controller-manager-functional-20210817021007-1554185" [14b3db86-2bc5-4092-8874-61a165b71e45] Running
	I0817 02:13:11.514745 1581051 system_pods.go:61] "kube-proxy-5crrc" [ff33274b-c870-4110-af24-a4056969c55a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 02:13:11.514751 1581051 system_pods.go:61] "kube-scheduler-functional-20210817021007-1554185" [c8ef8d45-0d8a-475e-96c6-737097845dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 02:13:11.514757 1581051 system_pods.go:61] "storage-provisioner" [f757e77c-ddf8-4d74-8754-424b9f0da712] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:13:11.514763 1581051 system_pods.go:74] duration metric: took 10.412893ms to wait for pod list to return data ...
	I0817 02:13:11.514769 1581051 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:13:11.517921 1581051 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:13:11.517934 1581051 node_conditions.go:123] node cpu capacity is 2
	I0817 02:13:11.517944 1581051 node_conditions.go:105] duration metric: took 3.171694ms to run NodePressure ...
	I0817 02:13:11.517956 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:13:11.851566 1581051 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 02:13:11.855266 1581051 kubeadm.go:746] kubelet initialised
	I0817 02:13:11.855273 1581051 kubeadm.go:747] duration metric: took 3.696136ms waiting for restarted kubelet to initialise ...
	I0817 02:13:11.855278 1581051 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:13:11.863764 1581051 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:11.883989 1581051 pod_ready.go:92] pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:11.883997 1581051 pod_ready.go:81] duration metric: took 20.221238ms waiting for pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:11.884012 1581051 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:11.888258 1581051 pod_ready.go:92] pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:11.888264 1581051 pod_ready.go:81] duration metric: took 4.246111ms waiting for pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:11.888273 1581051 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:13.897326 1581051 pod_ready.go:102] pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"False"
	I0817 02:13:16.397084 1581051 pod_ready.go:102] pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"False"
	I0817 02:13:18.397545 1581051 pod_ready.go:92] pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:18.397565 1581051 pod_ready.go:81] duration metric: took 6.509284337s waiting for pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:18.397575 1581051 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.406280 1581051 pod_ready.go:92] pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:19.406300 1581051 pod_ready.go:81] duration metric: took 1.008717521s waiting for pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.406309 1581051 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5crrc" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.410150 1581051 pod_ready.go:92] pod "kube-proxy-5crrc" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:19.410156 1581051 pod_ready.go:81] duration metric: took 3.841381ms waiting for pod "kube-proxy-5crrc" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.410163 1581051 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.413776 1581051 pod_ready.go:92] pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:19.413782 1581051 pod_ready.go:81] duration metric: took 3.613067ms waiting for pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.413791 1581051 pod_ready.go:38] duration metric: took 7.558504477s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:13:19.413805 1581051 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:13:19.423851 1581051 ops.go:34] apiserver oom_adj: -16
	I0817 02:13:19.423859 1581051 kubeadm.go:604] restartCluster took 29.437904549s
	I0817 02:13:19.423864 1581051 kubeadm.go:392] StartCluster complete in 29.50308089s
	I0817 02:13:19.423877 1581051 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:13:19.423959 1581051 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:13:19.424616 1581051 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:13:19.428572 1581051 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20210817021007-1554185" rescaled to 1
	I0817 02:13:19.428601 1581051 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 02:13:19.430756 1581051 out.go:177] * Verifying Kubernetes components...
	I0817 02:13:19.430807 1581051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:13:19.428702 1581051 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:13:19.428904 1581051 config.go:177] Loaded profile config "functional-20210817021007-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:13:19.428916 1581051 addons.go:342] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0817 02:13:19.430917 1581051 addons.go:59] Setting storage-provisioner=true in profile "functional-20210817021007-1554185"
	I0817 02:13:19.430929 1581051 addons.go:135] Setting addon storage-provisioner=true in "functional-20210817021007-1554185"
	W0817 02:13:19.430933 1581051 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:13:19.430952 1581051 host.go:66] Checking if "functional-20210817021007-1554185" exists ...
	I0817 02:13:19.430968 1581051 addons.go:59] Setting default-storageclass=true in profile "functional-20210817021007-1554185"
	I0817 02:13:19.430981 1581051 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20210817021007-1554185"
	I0817 02:13:19.431250 1581051 cli_runner.go:115] Run: docker container inspect functional-20210817021007-1554185 --format={{.State.Status}}
	I0817 02:13:19.431410 1581051 cli_runner.go:115] Run: docker container inspect functional-20210817021007-1554185 --format={{.State.Status}}
	I0817 02:13:19.454735 1581051 node_ready.go:35] waiting up to 6m0s for node "functional-20210817021007-1554185" to be "Ready" ...
	I0817 02:13:19.462941 1581051 node_ready.go:49] node "functional-20210817021007-1554185" has status "Ready":"True"
	I0817 02:13:19.462947 1581051 node_ready.go:38] duration metric: took 8.196554ms waiting for node "functional-20210817021007-1554185" to be "Ready" ...
	I0817 02:13:19.462954 1581051 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:13:19.468049 1581051 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.500071 1581051 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:13:19.500160 1581051 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:13:19.500167 1581051 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:13:19.500213 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:13:19.501777 1581051 addons.go:135] Setting addon default-storageclass=true in "functional-20210817021007-1554185"
	W0817 02:13:19.501786 1581051 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:13:19.501809 1581051 host.go:66] Checking if "functional-20210817021007-1554185" exists ...
	I0817 02:13:19.502243 1581051 cli_runner.go:115] Run: docker container inspect functional-20210817021007-1554185 --format={{.State.Status}}
	I0817 02:13:19.551738 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:13:19.583914 1581051 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:13:19.583924 1581051 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:13:19.583971 1581051 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
	I0817 02:13:19.607059 1581051 pod_ready.go:92] pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:19.607067 1581051 pod_ready.go:81] duration metric: took 139.006758ms waiting for pod "coredns-558bd4d5db-hwn2j" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.607076 1581051 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:19.633163 1581051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
	I0817 02:13:19.664126 1581051 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 02:13:19.688471 1581051 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:13:19.760696 1581051 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:13:20.004474 1581051 pod_ready.go:92] pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:20.004483 1581051 pod_ready.go:81] duration metric: took 397.400219ms waiting for pod "etcd-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:20.004496 1581051 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:20.074050 1581051 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:13:20.074087 1581051 addons.go:344] enableAddons completed in 645.17316ms
	I0817 02:13:20.396338 1581051 pod_ready.go:92] pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:20.396346 1581051 pod_ready.go:81] duration metric: took 391.843212ms waiting for pod "kube-apiserver-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:20.396356 1581051 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:20.795722 1581051 pod_ready.go:92] pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:20.795730 1581051 pod_ready.go:81] duration metric: took 399.367117ms waiting for pod "kube-controller-manager-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:20.795740 1581051 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5crrc" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:21.195905 1581051 pod_ready.go:92] pod "kube-proxy-5crrc" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:21.195912 1581051 pod_ready.go:81] duration metric: took 400.166288ms waiting for pod "kube-proxy-5crrc" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:21.195920 1581051 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:21.596206 1581051 pod_ready.go:92] pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:13:21.596215 1581051 pod_ready.go:81] duration metric: took 400.287338ms waiting for pod "kube-scheduler-functional-20210817021007-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:13:21.596224 1581051 pod_ready.go:38] duration metric: took 2.133261068s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
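The pod_ready waits above poll each system-critical pod until its Ready condition is True. An equivalent effect can be sketched with `kubectl wait`; the helper below is illustrative only and is not how minikube implements pod_ready.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // waitPodReady blocks until pods matching the selector report Ready,
    // using `kubectl wait`; paths and labels mirror the log above.
    func waitPodReady(kubeconfig, namespace, selector, timeout string) error {
    	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "-n", namespace,
    		"wait", "--for=condition=Ready", "pod", "-l", selector, "--timeout="+timeout)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("kubectl wait: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := waitPodReady("/var/lib/minikube/kubeconfig", "kube-system", "k8s-app=kube-dns", "4m0s")
    	fmt.Println(err)
    }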
	I0817 02:13:21.596237 1581051 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:13:21.596280 1581051 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:13:21.609169 1581051 api_server.go:70] duration metric: took 2.180548488s to wait for apiserver process to appear ...
	I0817 02:13:21.609178 1581051 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:13:21.609186 1581051 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0817 02:13:21.617657 1581051 api_server.go:265] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0817 02:13:21.618433 1581051 api_server.go:139] control plane version: v1.21.3
	I0817 02:13:21.618441 1581051 api_server.go:129] duration metric: took 9.259181ms to wait for apiserver health ...
	I0817 02:13:21.618447 1581051 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:13:21.799161 1581051 system_pods.go:59] 8 kube-system pods found
	I0817 02:13:21.799173 1581051 system_pods.go:61] "coredns-558bd4d5db-hwn2j" [f70985d7-d2b7-408b-9d54-8d6c0b83ab1b] Running
	I0817 02:13:21.799177 1581051 system_pods.go:61] "etcd-functional-20210817021007-1554185" [c22ede17-13df-4f29-9c3a-172efe4e4b09] Running
	I0817 02:13:21.799181 1581051 system_pods.go:61] "kindnet-j6pl5" [05b56857-a0f7-456b-a198-2eadf300625f] Running
	I0817 02:13:21.799186 1581051 system_pods.go:61] "kube-apiserver-functional-20210817021007-1554185" [f85b64df-d133-4ace-8fdf-6f4282916df8] Running
	I0817 02:13:21.799190 1581051 system_pods.go:61] "kube-controller-manager-functional-20210817021007-1554185" [14b3db86-2bc5-4092-8874-61a165b71e45] Running
	I0817 02:13:21.799194 1581051 system_pods.go:61] "kube-proxy-5crrc" [ff33274b-c870-4110-af24-a4056969c55a] Running
	I0817 02:13:21.799198 1581051 system_pods.go:61] "kube-scheduler-functional-20210817021007-1554185" [c8ef8d45-0d8a-475e-96c6-737097845dcb] Running
	I0817 02:13:21.799202 1581051 system_pods.go:61] "storage-provisioner" [f757e77c-ddf8-4d74-8754-424b9f0da712] Running
	I0817 02:13:21.799206 1581051 system_pods.go:74] duration metric: took 180.754796ms to wait for pod list to return data ...
	I0817 02:13:21.799211 1581051 default_sa.go:34] waiting for default service account to be created ...
	I0817 02:13:21.998510 1581051 default_sa.go:45] found service account: "default"
	I0817 02:13:21.998525 1581051 default_sa.go:55] duration metric: took 199.309737ms for default service account to be created ...
	I0817 02:13:21.998531 1581051 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 02:13:22.199290 1581051 system_pods.go:86] 8 kube-system pods found
	I0817 02:13:22.199304 1581051 system_pods.go:89] "coredns-558bd4d5db-hwn2j" [f70985d7-d2b7-408b-9d54-8d6c0b83ab1b] Running
	I0817 02:13:22.199310 1581051 system_pods.go:89] "etcd-functional-20210817021007-1554185" [c22ede17-13df-4f29-9c3a-172efe4e4b09] Running
	I0817 02:13:22.199314 1581051 system_pods.go:89] "kindnet-j6pl5" [05b56857-a0f7-456b-a198-2eadf300625f] Running
	I0817 02:13:22.199319 1581051 system_pods.go:89] "kube-apiserver-functional-20210817021007-1554185" [f85b64df-d133-4ace-8fdf-6f4282916df8] Running
	I0817 02:13:22.199326 1581051 system_pods.go:89] "kube-controller-manager-functional-20210817021007-1554185" [14b3db86-2bc5-4092-8874-61a165b71e45] Running
	I0817 02:13:22.199330 1581051 system_pods.go:89] "kube-proxy-5crrc" [ff33274b-c870-4110-af24-a4056969c55a] Running
	I0817 02:13:22.199335 1581051 system_pods.go:89] "kube-scheduler-functional-20210817021007-1554185" [c8ef8d45-0d8a-475e-96c6-737097845dcb] Running
	I0817 02:13:22.199338 1581051 system_pods.go:89] "storage-provisioner" [f757e77c-ddf8-4d74-8754-424b9f0da712] Running
	I0817 02:13:22.199343 1581051 system_pods.go:126] duration metric: took 200.809102ms to wait for k8s-apps to be running ...
	I0817 02:13:22.199349 1581051 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 02:13:22.199394 1581051 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:13:22.208388 1581051 system_svc.go:56] duration metric: took 9.035372ms WaitForService to wait for kubelet.
	I0817 02:13:22.208397 1581051 kubeadm.go:547] duration metric: took 2.77978148s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 02:13:22.208415 1581051 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:13:22.396720 1581051 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:13:22.396730 1581051 node_conditions.go:123] node cpu capacity is 2
	I0817 02:13:22.396740 1581051 node_conditions.go:105] duration metric: took 188.320784ms to run NodePressure ...
	I0817 02:13:22.396748 1581051 start.go:231] waiting for startup goroutines ...
	I0817 02:13:22.448536 1581051 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 02:13:22.450801 1581051 out.go:177] * Done! kubectl is now configured to use "functional-20210817021007-1554185" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a134cfa72ef30       ba04bb24b9575       17 seconds ago       Running             storage-provisioner       1                   097a0e56d670f
	12cbd98080c41       4ea38350a1beb       18 seconds ago       Running             kube-proxy                1                   a18545e5e3a38
	2d043a197ec35       1a1f05a2cd7c2       18 seconds ago       Running             coredns                   1                   d307bb881660d
	d07341ce12e32       f37b7c809e5dc       18 seconds ago       Running             kindnet-cni               1                   3c1724c3b0db2
	82907d1c8abec       44a6d50ef170d       27 seconds ago       Running             kube-apiserver            0                   49cabe38d7e82
	a5de4bf70d6b1       cb310ff289d79       27 seconds ago       Running             kube-controller-manager   1                   9f018263acf74
	9957a9c6e7457       05b738aa1bc63       27 seconds ago       Running             etcd                      1                   d6d60a35ea281
	1e6ff32edae34       31a3b96cefc1e       27 seconds ago       Running             kube-scheduler            1                   af2ad6c437351
	dcb3485081788       1a1f05a2cd7c2       About a minute ago   Exited              coredns                   0                   d307bb881660d
	2e48c5b1b49b7       ba04bb24b9575       About a minute ago   Exited              storage-provisioner       0                   097a0e56d670f
	f5ccf8d4a795a       f37b7c809e5dc       2 minutes ago        Exited              kindnet-cni               0                   3c1724c3b0db2
	50882d637a8db       4ea38350a1beb       2 minutes ago        Exited              kube-proxy                0                   a18545e5e3a38
	c4691bd6b5911       05b738aa1bc63       2 minutes ago        Exited              etcd                      0                   d6d60a35ea281
	22d1f79cab38d       31a3b96cefc1e       2 minutes ago        Exited              kube-scheduler            0                   af2ad6c437351
	2003791d6509e       cb310ff289d79       2 minutes ago        Exited              kube-controller-manager   0                   9f018263acf74
	f45d89600a904       44a6d50ef170d       2 minutes ago        Exited              kube-apiserver            0                   9ebc31c97d045
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:10:09 UTC, end at Tue 2021-08-17 02:13:30 UTC. --
	Aug 17 02:13:11 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:11.970889716Z" level=info msg="TaskExit event &TaskExit{ContainerID:9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2,ID:9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2,Pid:917,ExitStatus:137,ExitedAt:2021-08-17 02:13:11.970683999 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 02:13:12 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:12.010467018Z" level=info msg="TearDown network for sandbox \"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2\" successfully"
	Aug 17 02:13:12 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:12.010500930Z" level=info msg="StopPodSandbox for \"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2\" returns successfully"
	Aug 17 02:13:12 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:12.010640670Z" level=info msg="shim disconnected" id=9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2
	Aug 17 02:13:12 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:12.010772115Z" level=error msg="copy shim log" error="read /proc/self/fd/26: file already closed"
	Aug 17 02:13:12 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:12.125046968Z" level=info msg="CreateContainer within sandbox \"097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Aug 17 02:13:12 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:12.145661712Z" level=info msg="CreateContainer within sandbox \"097a0e56d670f7809e7fbdc96ed62622b8e9494385cb1774441ebf10a6c22537\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"a134cfa72ef30019193e5918256bcb07ed51a78c56f83e577fe6b8e87008a846\""
	Aug 17 02:13:12 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:12.145963715Z" level=info msg="StartContainer for \"a134cfa72ef30019193e5918256bcb07ed51a78c56f83e577fe6b8e87008a846\""
	Aug 17 02:13:12 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:12.246044011Z" level=info msg="StartContainer for \"a134cfa72ef30019193e5918256bcb07ed51a78c56f83e577fe6b8e87008a846\" returns successfully"
	Aug 17 02:13:13 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:13.947817454Z" level=info msg="StopPodSandbox for \"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2\""
	Aug 17 02:13:13 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:13.947887968Z" level=info msg="Container to stop \"f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:13:13 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:13.947957999Z" level=info msg="TearDown network for sandbox \"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2\" successfully"
	Aug 17 02:13:13 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:13.947969305Z" level=info msg="StopPodSandbox for \"9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2\" returns successfully"
	Aug 17 02:13:26 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:26.763607421Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/busybox:load-from-file-functional-20210817021007-1554185,Labels:map[string]string{},XXX_unrecognized:[],}"
	Aug 17 02:13:26 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:26.771938316Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:19d689bc58fd64da6a46d46512ea965a12b6bfb5b030400e21bc0a04c4ff155e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 17 02:13:26 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:26.772228881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/busybox:load-from-file-functional-20210817021007-1554185,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 17 02:13:28 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:28.145774147Z" level=error msg="(*service).Write failed" error="rpc error: code = Canceled desc = context canceled" expected="sha256:90441bfaac70995ed0539fcde9e822a6293a6aac2701899520ac5d249c074414" ref="config-sha256:90441bfaac70995ed0539fcde9e822a6293a6aac2701899520ac5d249c074414" total=1457
	Aug 17 02:13:28 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:28.146243205Z" level=error msg="(*service).Write failed" error="rpc error: code = Canceled desc = context canceled" expected="sha256:68bb6d826ec5a8d3c9511e087a39eef42c804d8e499cbf0e8d8b1a5fa2494e4b" ref="config-sha256:68bb6d826ec5a8d3c9511e087a39eef42c804d8e499cbf0e8d8b1a5fa2494e4b" total=1467
	Aug 17 02:13:28 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:28.389668505Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/busybox:remove-functional-20210817021007-1554185,Labels:map[string]string{},XXX_unrecognized:[],}"
	Aug 17 02:13:28 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:28.394733111Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a3d4686b4031779b6f4d38149c08487a9e859d91c73aaffbd98caded91b768d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 17 02:13:28 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:28.395134584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/busybox:remove-functional-20210817021007-1554185,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 17 02:13:28 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:28.753492757Z" level=info msg="RemoveImage \"docker.io/library/busybox:remove-functional-20210817021007-1554185\""
	Aug 17 02:13:28 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:28.755581712Z" level=info msg="ImageDelete event &ImageDelete{Name:sha256:2a3d4686b4031779b6f4d38149c08487a9e859d91c73aaffbd98caded91b768d,XXX_unrecognized:[],}"
	Aug 17 02:13:28 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:28.757593860Z" level=info msg="ImageDelete event &ImageDelete{Name:docker.io/library/busybox:remove-functional-20210817021007-1554185,XXX_unrecognized:[],}"
	Aug 17 02:13:28 functional-20210817021007-1554185 containerd[3069]: time="2021-08-17T02:13:28.765359305Z" level=info msg="RemoveImage \"docker.io/library/busybox:remove-functional-20210817021007-1554185\" returns successfully"
	
	* 
	* ==> coredns [2d043a197ec35b03025658e8a189e9a49ac30056e2358b98e5e0c85615fbbde9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> coredns [dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20210817021007-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-20210817021007-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=functional-20210817021007-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T02_11_02_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 02:10:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20210817021007-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 02:13:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 02:13:09 +0000   Tue, 17 Aug 2021 02:10:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 02:13:09 +0000   Tue, 17 Aug 2021 02:10:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 02:13:09 +0000   Tue, 17 Aug 2021 02:10:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 02:13:09 +0000   Tue, 17 Aug 2021 02:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20210817021007-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                7cd3ce4e-d107-428e-9bf7-b8e3aad5da0f
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-hwn2j                                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m14s
	  kube-system                 etcd-functional-20210817021007-1554185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m19s
	  kube-system                 kindnet-j6pl5                                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-functional-20210817021007-1554185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-controller-manager-functional-20210817021007-1554185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-5crrc                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-functional-20210817021007-1554185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  2m38s (x5 over 2m38s)  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s (x4 over 2m38s)  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s (x4 over 2m38s)  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m20s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m20s                  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s                  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s                  kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 2m14s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                90s                    kubelet     Node functional-20210817021007-1554185 status is now: NodeReady
	  Normal  Starting                 29s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)      kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)      kubelet     Node functional-20210817021007-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)      kubelet     Node functional-20210817021007-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 18s                    kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug17 01:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [9957a9c6e74576996352d46594efafef2b23f6669b100da5c2de39c511314a45] <==
	* raft2021/08/17 02:13:02 INFO: aec36adc501070cc switched to configuration voters=()
	raft2021/08/17 02:13:02 INFO: aec36adc501070cc became follower at term 2
	raft2021/08/17 02:13:02 INFO: newRaft aec36adc501070cc [peers: [], term: 2, commit: 596, applied: 0, lastindex: 596, lastterm: 2]
	2021-08-17 02:13:02.820791 W | auth: simple token is not cryptographically signed
	2021-08-17 02:13:02.834985 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-17 02:13:02.836736 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 02:13:02.836864 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-17 02:13:02.837205 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 02:13:02 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 02:13:02.837484 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 02:13:02.837547 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 02:13:02.837578 I | etcdserver/api: enabled capabilities for version 3.4
	raft2021/08/17 02:13:04 INFO: aec36adc501070cc is starting a new election at term 2
	raft2021/08/17 02:13:04 INFO: aec36adc501070cc became candidate at term 3
	raft2021/08/17 02:13:04 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3
	raft2021/08/17 02:13:04 INFO: aec36adc501070cc became leader at term 3
	raft2021/08/17 02:13:04 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3
	2021-08-17 02:13:04.023317 I | etcdserver: published {Name:functional-20210817021007-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 02:13:04.023495 I | embed: ready to serve client requests
	2021-08-17 02:13:04.024972 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 02:13:04.030863 I | embed: ready to serve client requests
	2021-08-17 02:13:04.032102 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:13:12.669195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:13:18.913327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:13:28.912570 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [c4691bd6b59110206f316dc7846d97e530a6582132c90149b4b1cfa970fbd231] <==
	* 2021-08-17 02:10:53.363552 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/17 02:10:53 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 02:10:53 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 02:10:53 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 02:10:53 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 02:10:53 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 02:10:53.908924 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 02:10:53.909644 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 02:10:53.909803 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 02:10:53.909893 I | etcdserver: published {Name:functional-20210817021007-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 02:10:53.909977 I | embed: ready to serve client requests
	2021-08-17 02:10:53.911325 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:10:53.911642 I | embed: ready to serve client requests
	2021-08-17 02:10:53.914059 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 02:11:12.117266 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:11:15.942038 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:11:25.943037 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:11:35.942114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:11:45.942457 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:11:55.942536 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:12:05.943054 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:12:15.942442 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:12:25.942138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:12:35.942262 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:12:45.943039 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:13:30 up  9:55,  0 users,  load average: 1.14, 0.89, 1.02
	Linux functional-20210817021007-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [82907d1c8abecfc9bae9ff0d65d732ca0318989aa5efac9548672a44a4571aff] <==
	* I0817 02:13:09.585205       1 establishing_controller.go:76] Starting EstablishingController
	I0817 02:13:09.585218       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0817 02:13:09.585229       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0817 02:13:09.585241       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0817 02:13:09.606855       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0817 02:13:09.606869       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0817 02:13:09.712126       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0817 02:13:09.727378       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 02:13:09.733223       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 02:13:09.734671       1 cache.go:39] Caches are synced for autoregister controller
	I0817 02:13:09.735069       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0817 02:13:09.735193       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0817 02:13:09.735628       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0817 02:13:09.773382       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0817 02:13:09.779124       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 02:13:10.513505       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 02:13:10.513530       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 02:13:10.517201       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 02:13:11.497370       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 02:13:11.733618       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 02:13:11.757226       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 02:13:11.830543       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 02:13:11.839010       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 02:13:22.995014       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 02:13:23.001768       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [f45d89600a90429a6e2732136672d9ca4aad4f8bf6ebffa23c9928e23e04f9d6] <==
	* W0817 02:12:50.231104       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231122       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231136       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231154       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231167       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231183       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231213       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231230       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231242       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231261       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231285       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231294       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231315       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231326       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231346       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231353       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231383       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231401       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231416       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231427       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231446       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231457       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.230868       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231477       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 02:12:50.231384       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [2003791d6509e9ecb04e813cff2a132089742b896aea918cb0421a664f7f1f48] <==
	* I0817 02:11:16.141688       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0817 02:11:16.142711       1 event.go:291] "Event occurred" object="functional-20210817021007-1554185" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20210817021007-1554185 event: Registered Node functional-20210817021007-1554185 in Controller"
	I0817 02:11:16.149460       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0817 02:11:16.149639       1 shared_informer.go:247] Caches are synced for endpoint 
	I0817 02:11:16.150864       1 shared_informer.go:247] Caches are synced for PV protection 
	I0817 02:11:16.158931       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 02:11:16.170851       1 shared_informer.go:247] Caches are synced for service account 
	W0817 02:11:16.188708       1 endpointslice_controller.go:305] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	I0817 02:11:16.190083       1 event.go:291] "Event occurred" object="kube-system/etcd-functional-20210817021007-1554185" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 02:11:16.190200       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-functional-20210817021007-1554185" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 02:11:16.190419       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-hwn2j"
	I0817 02:11:16.243401       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j6pl5"
	I0817 02:11:16.243581       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5crrc"
	I0817 02:11:16.305759       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0817 02:11:16.319222       1 shared_informer.go:247] Caches are synced for disruption 
	I0817 02:11:16.322837       1 disruption.go:371] Sending events to api server.
	E0817 02:11:16.325730       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"9034d0c5-ad4f-446a-870e-a81158502e06", ResourceVersion:"415", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764763062, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001fda630), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001fda648)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001fda660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001fda678)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001f8f620), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Crea
tionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001fda690), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.Flex
VolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001fda6a8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVo
lumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CS
IVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001fda6c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*
v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001f8f640)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001f8f680)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amou
nt{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropa
gation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001fb5560), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001fccfe8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40005051f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(ni
l), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001fdf7c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001fcd030)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetConditio
n(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0817 02:11:16.342897       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:11:16.409582       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:11:16.486937       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0817 02:11:16.502939       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-59tz7"
	I0817 02:11:16.766228       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:11:16.849104       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:11:16.849132       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 02:12:01.146582       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-controller-manager [a5de4bf70d6b15f0690c7fa47a8bbe220d6a9421a270ca19ae0415d1cce3e279] <==
	* I0817 02:13:22.959430       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0817 02:13:22.959546       1 shared_informer.go:247] Caches are synced for attach detach 
	I0817 02:13:22.960509       1 shared_informer.go:247] Caches are synced for endpoint 
	I0817 02:13:22.961483       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0817 02:13:22.962456       1 event.go:291] "Event occurred" object="functional-20210817021007-1554185" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20210817021007-1554185 event: Registered Node functional-20210817021007-1554185 in Controller"
	I0817 02:13:22.964821       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0817 02:13:22.965113       1 shared_informer.go:247] Caches are synced for GC 
	I0817 02:13:22.965709       1 shared_informer.go:247] Caches are synced for disruption 
	I0817 02:13:22.965816       1 disruption.go:371] Sending events to api server.
	I0817 02:13:22.969223       1 shared_informer.go:247] Caches are synced for TTL 
	I0817 02:13:22.976409       1 shared_informer.go:247] Caches are synced for deployment 
	I0817 02:13:23.015704       1 shared_informer.go:247] Caches are synced for expand 
	I0817 02:13:23.026997       1 shared_informer.go:247] Caches are synced for stateful set 
	I0817 02:13:23.039054       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0817 02:13:23.039223       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0817 02:13:23.039359       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0817 02:13:23.039386       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0817 02:13:23.046604       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0817 02:13:23.087368       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:13:23.131120       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 02:13:23.164808       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:13:23.265154       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0817 02:13:23.651349       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:13:23.666878       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:13:23.666894       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [12cbd98080c419ded9a5dbf4fc964388effa7fa297ddf5a27d30469abf42a1af] <==
	* I0817 02:13:12.053736       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:13:12.053785       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:13:12.053805       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:13:12.075594       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:13:12.075619       1 server_others.go:212] Using iptables Proxier.
	I0817 02:13:12.075628       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:13:12.075639       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:13:12.076073       1 server.go:643] Version: v1.21.3
	I0817 02:13:12.076689       1 config.go:315] Starting service config controller
	I0817 02:13:12.076707       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:13:12.076793       1 config.go:224] Starting endpoint slice config controller
	I0817 02:13:12.076806       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:13:12.086115       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:13:12.091151       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:13:12.177451       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 02:13:12.177457       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1] <==
	* I0817 02:11:16.894705       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:11:16.894942       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:11:16.895032       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:11:16.928213       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:11:16.928342       1 server_others.go:212] Using iptables Proxier.
	I0817 02:11:16.928441       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:11:16.928521       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:11:16.931724       1 server.go:643] Version: v1.21.3
	I0817 02:11:16.932480       1 config.go:315] Starting service config controller
	I0817 02:11:16.932578       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:11:16.932686       1 config.go:224] Starting endpoint slice config controller
	I0817 02:11:16.932767       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:11:16.937956       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:11:16.940369       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:11:17.032795       1 shared_informer.go:247] Caches are synced for service config 
	I0817 02:11:17.032856       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [1e6ff32edae34dda2c87416ed842a9831a9bd160752a1210d740274e650d265d] <==
	* I0817 02:13:05.087351       1 serving.go:347] Generated self-signed cert in-memory
	W0817 02:13:09.654538       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 02:13:09.654571       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 02:13:09.654580       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 02:13:09.654587       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 02:13:09.758659       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 02:13:09.759223       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 02:13:09.770760       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 02:13:09.761731       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0817 02:13:09.871723       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [22d1f79cab38d80eab48e1ee63475d2dc078f9b28ddf3b4b161dd6277654842d] <==
	* W0817 02:10:59.640856       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 02:10:59.640862       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 02:10:59.698489       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 02:10:59.698866       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0817 02:10:59.698904       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 02:10:59.713744       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0817 02:10:59.721343       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 02:10:59.721495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:10:59.721621       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:10:59.721827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:10:59.722390       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:10:59.722079       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:10:59.722152       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:10:59.722218       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:10:59.722284       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:10:59.722334       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:10:59.726916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 02:10:59.727160       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:10:59.727791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:10:59.728042       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 02:11:00.548035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:11:00.548272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:11:00.689047       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:11:00.716644       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 02:11:02.514618       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:10:09 UTC, end at Tue 2021-08-17 02:13:30 UTC. --
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.622598    3562 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.622649    3562 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.622701    3562 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.624448    3562 kubelet.go:1666] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20210817021007-1554185" podUID=72414613-9b92-4d40-b2bd-d671d7c64656
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.636023    3562 kubelet.go:1670] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-20210817021007-1554185"
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.779934    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/05b56857-a0f7-456b-a198-2eadf300625f-cni-cfg\") pod \"kindnet-j6pl5\" (UID: \"05b56857-a0f7-456b-a198-2eadf300625f\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.779983    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwgd\" (UniqueName: \"kubernetes.io/projected/f757e77c-ddf8-4d74-8754-424b9f0da712-kube-api-access-xmwgd\") pod \"storage-provisioner\" (UID: \"f757e77c-ddf8-4d74-8754-424b9f0da712\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780020    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f757e77c-ddf8-4d74-8754-424b9f0da712-tmp\") pod \"storage-provisioner\" (UID: \"f757e77c-ddf8-4d74-8754-424b9f0da712\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780048    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05b56857-a0f7-456b-a198-2eadf300625f-xtables-lock\") pod \"kindnet-j6pl5\" (UID: \"05b56857-a0f7-456b-a198-2eadf300625f\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780073    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff33274b-c870-4110-af24-a4056969c55a-xtables-lock\") pod \"kube-proxy-5crrc\" (UID: \"ff33274b-c870-4110-af24-a4056969c55a\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780108    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f70985d7-d2b7-408b-9d54-8d6c0b83ab1b-config-volume\") pod \"coredns-558bd4d5db-hwn2j\" (UID: \"f70985d7-d2b7-408b-9d54-8d6c0b83ab1b\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780134    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfp9z\" (UniqueName: \"kubernetes.io/projected/f70985d7-d2b7-408b-9d54-8d6c0b83ab1b-kube-api-access-gfp9z\") pod \"coredns-558bd4d5db-hwn2j\" (UID: \"f70985d7-d2b7-408b-9d54-8d6c0b83ab1b\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780159    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff33274b-c870-4110-af24-a4056969c55a-kube-proxy\") pod \"kube-proxy-5crrc\" (UID: \"ff33274b-c870-4110-af24-a4056969c55a\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780184    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05b56857-a0f7-456b-a198-2eadf300625f-lib-modules\") pod \"kindnet-j6pl5\" (UID: \"05b56857-a0f7-456b-a198-2eadf300625f\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780210    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff33274b-c870-4110-af24-a4056969c55a-lib-modules\") pod \"kube-proxy-5crrc\" (UID: \"ff33274b-c870-4110-af24-a4056969c55a\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780237    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8st9w\" (UniqueName: \"kubernetes.io/projected/05b56857-a0f7-456b-a198-2eadf300625f-kube-api-access-8st9w\") pod \"kindnet-j6pl5\" (UID: \"05b56857-a0f7-456b-a198-2eadf300625f\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780262    3562 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf829\" (UniqueName: \"kubernetes.io/projected/ff33274b-c870-4110-af24-a4056969c55a-kube-api-access-kf829\") pod \"kube-proxy-5crrc\" (UID: \"ff33274b-c870-4110-af24-a4056969c55a\") "
	Aug 17 02:13:10 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:10.780272    3562 reconciler.go:157] "Reconciler: start to sync state"
	Aug 17 02:13:11 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:11.523858    3562 scope.go:111] "RemoveContainer" containerID="f5ccf8d4a795ab810d831127cb56c6131018701190d91c8b1a10f1b28c513089"
	Aug 17 02:13:11 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:11.523947    3562 scope.go:111] "RemoveContainer" containerID="dcb3485081788b3ef88d5c2c86875c9b469805a057d36bd888ad7a7aebedf1a9"
	Aug 17 02:13:11 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:11.823639    3562 scope.go:111] "RemoveContainer" containerID="50882d637a8dbe44f36f4e58a1b8947687693888fdc374c5011a345bc36abca1"
	Aug 17 02:13:12 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:12.033604    3562 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9ebc31c97d0458487306c68eb1c6d23e9bcd3b6634f4a3aeb0a0cd2c78ea8ce2"
	Aug 17 02:13:12 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:12.123501    3562 scope.go:111] "RemoveContainer" containerID="2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1"
	Aug 17 02:13:13 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:13.038329    3562 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 17 02:13:13 functional-20210817021007-1554185 kubelet[3562]: I0817 02:13:13.948772    3562 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/0ca1ece4f336742d796dd3951c235ff2/volumes"
	
	* 
	* ==> storage-provisioner [2e48c5b1b49b73cabf6cf826f9e1f354d5627a7ed618d4582ac04ecc519af2a1] <==
	* I0817 02:12:06.128117       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 02:12:06.167270       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 02:12:06.167311       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 02:12:06.188392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 02:12:06.188647       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210817021007-1554185_912274ee-7890-41c4-9351-07c103e69cda!
	I0817 02:12:06.192793       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cdcc0882-50d6-400a-a02f-a68b7dbdaaf2", APIVersion:"v1", ResourceVersion:"515", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210817021007-1554185_912274ee-7890-41c4-9351-07c103e69cda became leader
	I0817 02:12:06.289504       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20210817021007-1554185_912274ee-7890-41c4-9351-07c103e69cda!
	
	* 
	* ==> storage-provisioner [a134cfa72ef30019193e5918256bcb07ed51a78c56f83e577fe6b8e87008a846] <==
	* I0817 02:13:12.280450       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 02:13:12.329015       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 02:13:12.329558       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 02:13:29.951233       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 02:13:29.951393       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210817021007-1554185_c54faadf-34aa-435d-b3d8-b964e33c46aa!
	I0817 02:13:29.952273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cdcc0882-50d6-400a-a02f-a68b7dbdaaf2", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210817021007-1554185_c54faadf-34aa-435d-b3d8-b964e33c46aa became leader
	I0817 02:13:30.051952       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20210817021007-1554185_c54faadf-34aa-435d-b3d8-b964e33c46aa!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210817021007-1554185 -n functional-20210817021007-1554185

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20210817021007-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestFunctional/parallel/BuildImage]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20210817021007-1554185 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context functional-20210817021007-1554185 describe pod : exit status 1 (94.104889ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context functional-20210817021007-1554185 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/BuildImage (5.77s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (241.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:146: (dbg) Run:  kubectl --context functional-20210817021007-1554185 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:150: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [842360d0-8253-4ab4-b42b-9940bf1090e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0817 02:13:35.364938 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:150: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService: pod "run=nginx-svc" failed to start within 4m0s: timed out waiting for the condition ****
functional_test_tunnel_test.go:150: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210817021007-1554185 -n functional-20210817021007-1554185
functional_test_tunnel_test.go:150: TestFunctional/parallel/TunnelCmd/serial/WaitService: showing logs for failed pods as of 2021-08-17 02:17:35.094069082 +0000 UTC m=+1685.888816883
functional_test_tunnel_test.go:150: (dbg) Run:  kubectl --context functional-20210817021007-1554185 describe po nginx-svc -n default
functional_test_tunnel_test.go:150: (dbg) kubectl --context functional-20210817021007-1554185 describe po nginx-svc -n default:
Name:         nginx-svc
Namespace:    default
Priority:     0
Node:         functional-20210817021007-1554185/192.168.49.2
Start Time:   Tue, 17 Aug 2021 02:13:34 +0000
Labels:       run=nginx-svc
Annotations:  <none>
Status:       Pending
IP:           10.244.0.4
IPs:
  IP:  10.244.0.4
Containers:
  nginx:
    Container ID:   
    Image:          nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-whc4s (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-whc4s:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m                     default-scheduler  Successfully assigned default/nginx-svc to functional-20210817021007-1554185
  Warning  Failed     3m17s (x3 over 3m59s)  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m34s (x4 over 4m)     kubelet            Pulling image "nginx:alpine"
  Warning  Failed     2m33s (x4 over 3m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     2m33s                  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     2m18s (x6 over 3m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m5s (x7 over 3m58s)   kubelet            Back-off pulling image "nginx:alpine"
functional_test_tunnel_test.go:150: (dbg) Run:  kubectl --context functional-20210817021007-1554185 logs nginx-svc -n default
functional_test_tunnel_test.go:150: (dbg) Non-zero exit: kubectl --context functional-20210817021007-1554185 logs nginx-svc -n default: exit status 1 (81.093679ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:150: kubectl --context functional-20210817021007-1554185 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:151: wait: run=nginx-svc within 4m0s: timed out waiting for the condition
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService (241.10s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (243.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-20210817021007-1554185 /tmp/mounttest087931240:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1629166614689234424" to /tmp/mounttest087931240/created-by-test
functional_test_mount_test.go:110: wrote "test-1629166614689234424" to /tmp/mounttest087931240/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1629166614689234424" to /tmp/mounttest087931240/test-1629166614689234424
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.930069ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 17 02:16 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 17 02:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 17 02:16 test-1629166614689234424
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh cat /mount-9p/test-1629166614689234424
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210817021007-1554185 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [784534a0-6950-41ce-a97f-f550d1eb05bf] Pending
helpers_test.go:343: "busybox-mount" [784534a0-6950-41ce-a97f-f550d1eb05bf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: ***** TestFunctional/parallel/MountCmd/any-port: pod "integration-test=busybox-mount" failed to start within 4m0s: timed out waiting for the condition ****
functional_test_mount_test.go:156: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210817021007-1554185 -n functional-20210817021007-1554185
functional_test_mount_test.go:156: TestFunctional/parallel/MountCmd/any-port: showing logs for failed pods as of 2021-08-17 02:20:56.823334189 +0000 UTC m=+1887.618081973
functional_test_mount_test.go:156: (dbg) Run:  kubectl --context functional-20210817021007-1554185 describe po busybox-mount -n default
functional_test_mount_test.go:156: (dbg) kubectl --context functional-20210817021007-1554185 describe po busybox-mount -n default:
Name:         busybox-mount
Namespace:    default
Priority:     0
Node:         functional-20210817021007-1554185/192.168.49.2
Start Time:   Tue, 17 Aug 2021 02:16:56 +0000
Labels:       integration-test=busybox-mount
Annotations:  <none>
Status:       Pending
IP:           10.244.0.6
IPs:
  IP:  10.244.0.6
Containers:
  mount-munger:
    Container ID:  
    Image:         busybox:1.28.4-glibc
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      --
    Args:
      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mount-9p from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-886fj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  test-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /mount-9p
    HostPathType:  
  kube-api-access-886fj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m                     default-scheduler  Successfully assigned default/busybox-mount to functional-20210817021007-1554185
  Warning  Failed     3m18s (x3 over 3m59s)  kubelet            Failed to pull image "busybox:1.28.4-glibc": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.28.4-glibc": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m27s (x4 over 4m)     kubelet            Pulling image "busybox:1.28.4-glibc"
  Warning  Failed     2m25s (x4 over 3m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     2m25s                  kubelet            Failed to pull image "busybox:1.28.4-glibc": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.28.4-glibc": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     2m12s (x6 over 3m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    118s (x7 over 3m58s)   kubelet            Back-off pulling image "busybox:1.28.4-glibc"
functional_test_mount_test.go:156: (dbg) Run:  kubectl --context functional-20210817021007-1554185 logs busybox-mount -n default
functional_test_mount_test.go:156: (dbg) Non-zero exit: kubectl --context functional-20210817021007-1554185 logs busybox-mount -n default: exit status 1 (109.513177ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mount-munger" in pod "busybox-mount" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_mount_test.go:156: kubectl --context functional-20210817021007-1554185 logs busybox-mount -n default: exit status 1
functional_test_mount_test.go:157: failed waiting for busybox-mount pod: integration-test=busybox-mount within 4m0s: timed out waiting for the condition
functional_test_mount_test.go:83: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:84: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:84: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (282.83937ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=999,access=any,msize=65536,trans=tcp,noextend,port=43403)
	total 2
	-rw-r--r-- 1 docker docker 24 Aug 17 02:16 created-by-test
	-rw-r--r-- 1 docker docker 24 Aug 17 02:16 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Aug 17 02:16 test-1629166614689234424
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:86: debugging command "out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-20210817021007-1554185 /tmp/mounttest087931240:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:97: (dbg) [out/minikube-linux-arm64 mount -p functional-20210817021007-1554185 /tmp/mounttest087931240:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/mounttest087931240 into VM as /mount-9p ...
- Mount type:   
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Permissions:  755 (-rwxr-xr-x)
- Options:      map[]
- Bind Address: 192.168.49.1:43403
* Userspace file server: ufs starting
* Successfully mounted /tmp/mounttest087931240 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:97: (dbg) [out/minikube-linux-arm64 mount -p functional-20210817021007-1554185 /tmp/mounttest087931240:/mount-9p --alsologtostderr -v=1] stderr:
I0817 02:16:54.742069 1587228 out.go:298] Setting OutFile to fd 1 ...
I0817 02:16:54.742179 1587228 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0817 02:16:54.742183 1587228 out.go:311] Setting ErrFile to fd 2...
I0817 02:16:54.742186 1587228 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0817 02:16:54.742342 1587228 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
I0817 02:16:54.742539 1587228 mustload.go:65] Loading cluster: functional-20210817021007-1554185
I0817 02:16:54.742926 1587228 config.go:177] Loaded profile config "functional-20210817021007-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
I0817 02:16:54.743397 1587228 cli_runner.go:115] Run: docker container inspect functional-20210817021007-1554185 --format={{.State.Status}}
I0817 02:16:54.786404 1587228 host.go:66] Checking if "functional-20210817021007-1554185" exists ...
I0817 02:16:54.786695 1587228 cli_runner.go:115] Run: docker network inspect functional-20210817021007-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0817 02:16:54.831871 1587228 out.go:177] * Mounting host path /tmp/mounttest087931240 into VM as /mount-9p ...
I0817 02:16:54.834234 1587228 out.go:177]   - Mount type:   
I0817 02:16:54.836225 1587228 out.go:177]   - User ID:      docker
I0817 02:16:54.838631 1587228 out.go:177]   - Group ID:     docker
I0817 02:16:54.840578 1587228 out.go:177]   - Version:      9p2000.L
I0817 02:16:54.842445 1587228 out.go:177]   - Message Size: 262144
I0817 02:16:54.844760 1587228 out.go:177]   - Permissions:  755 (-rwxr-xr-x)
I0817 02:16:54.846846 1587228 out.go:177]   - Options:      map[]
I0817 02:16:54.848323 1587228 out.go:177]   - Bind Address: 192.168.49.1:43403
I0817 02:16:54.850626 1587228 out.go:177] * Userspace file server: 
I0817 02:16:54.848474 1587228 ssh_runner.go:149] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0817 02:16:54.850729 1587228 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210817021007-1554185
I0817 02:16:54.904892 1587228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50324 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/functional-20210817021007-1554185/id_rsa Username:docker}
I0817 02:16:55.022840 1587228 mount.go:169] unmount for /mount-9p ran successfully
I0817 02:16:55.022862 1587228 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -m 755 -p /mount-9p"
I0817 02:16:55.030832 1587228 ssh_runner.go:149] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=43403,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I0817 02:16:55.051163 1587228 main.go:116] stdlog: ufs.go:141 connected
I0817 02:16:55.053643 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tversion tag 65535 msize 65536 version '9P2000.L'
I0817 02:16:55.053700 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rversion tag 65535 msize 65536 version '9P2000'
I0817 02:16:55.054225 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0817 02:16:55.054289 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rattach tag 0 aqid (43a73 51e5b89e 'd')
I0817 02:16:55.055185 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 0
I0817 02:16:55.055238 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('mounttest087931240' 'jenkins' 'jenkins' '' q (43a73 51e5b89e 'd') m d700 at 0 mt 1629166614 l 4096 t 0 d 0 ext )
I0817 02:16:55.057894 1587228 mount.go:94] mount successful: ""
I0817 02:16:55.060148 1587228 out.go:177] * Successfully mounted /tmp/mounttest087931240 to /mount-9p
I0817 02:16:55.061827 1587228 out.go:177] 
I0817 02:16:55.063521 1587228 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0817 02:16:55.711743 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 0
I0817 02:16:55.711839 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('mounttest087931240' 'jenkins' 'jenkins' '' q (43a73 51e5b89e 'd') m d700 at 0 mt 1629166614 l 4096 t 0 d 0 ext )
I0817 02:16:55.978154 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 0
I0817 02:16:55.978219 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('mounttest087931240' 'jenkins' 'jenkins' '' q (43a73 51e5b89e 'd') m d700 at 0 mt 1629166614 l 4096 t 0 d 0 ext )
I0817 02:16:55.978555 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 0 newfid 1 
I0817 02:16:55.978584 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rwalk tag 0 
I0817 02:16:55.978702 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Topen tag 0 fid 1 mode 0
I0817 02:16:55.978757 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Ropen tag 0 qid (43a73 51e5b89e 'd') iounit 0
I0817 02:16:55.978883 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 0
I0817 02:16:55.978915 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('mounttest087931240' 'jenkins' 'jenkins' '' q (43a73 51e5b89e 'd') m d700 at 0 mt 1629166614 l 4096 t 0 d 0 ext )
I0817 02:16:55.979037 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 1 offset 0 count 65512
I0817 02:16:55.979135 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 258
I0817 02:16:55.979248 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 1 offset 258 count 65254
I0817 02:16:55.979272 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 0
I0817 02:16:55.979374 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 1 offset 258 count 65512
I0817 02:16:55.979395 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 0
I0817 02:16:55.979499 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0817 02:16:55.979530 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rwalk tag 0 (43a74 51e5b89e '') 
I0817 02:16:55.979647 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:16:55.979674 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (43a74 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:16:55.979792 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:16:55.979820 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (43a74 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:16:55.979919 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 2
I0817 02:16:55.979939 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:16:55.980039 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 0 newfid 2 0:'test-1629166614689234424' 
I0817 02:16:55.980071 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rwalk tag 0 (43a98 51e5b89e '') 
I0817 02:16:55.980166 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:16:55.980192 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('test-1629166614689234424' 'jenkins' 'jenkins' '' q (43a98 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:16:55.980294 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:16:55.980325 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('test-1629166614689234424' 'jenkins' 'jenkins' '' q (43a98 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:16:55.980428 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 2
I0817 02:16:55.980450 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:16:55.980567 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0817 02:16:55.980615 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rwalk tag 0 (43a76 51e5b89e '') 
I0817 02:16:55.980738 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:16:55.980777 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (43a76 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:16:55.980885 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:16:55.980917 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (43a76 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:16:55.981017 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 2
I0817 02:16:55.981035 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:16:55.981135 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 1 offset 258 count 65512
I0817 02:16:55.981168 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 0
I0817 02:16:55.981260 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 1
I0817 02:16:55.981287 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:16:56.245923 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 0 newfid 1 0:'test-1629166614689234424' 
I0817 02:16:56.246006 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rwalk tag 0 (43a98 51e5b89e '') 
I0817 02:16:56.246131 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 1
I0817 02:16:56.246173 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('test-1629166614689234424' 'jenkins' 'jenkins' '' q (43a98 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:16:56.253525 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 1 newfid 2 
I0817 02:16:56.253571 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rwalk tag 0 
I0817 02:16:56.253697 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Topen tag 0 fid 2 mode 0
I0817 02:16:56.253743 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Ropen tag 0 qid (43a98 51e5b89e '') iounit 0
I0817 02:16:56.253839 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 1
I0817 02:16:56.253875 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('test-1629166614689234424' 'jenkins' 'jenkins' '' q (43a98 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:16:56.253984 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 2 offset 0 count 65512
I0817 02:16:56.254026 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 24
I0817 02:16:56.254121 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 2 offset 24 count 65512
I0817 02:16:56.254142 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 0
I0817 02:16:56.254268 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 2 offset 24 count 65512
I0817 02:16:56.254289 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 0
I0817 02:16:56.254555 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 2
I0817 02:16:56.254599 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:16:56.254709 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 1
I0817 02:16:56.254730 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:20:57.280835 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 0
I0817 02:20:57.280902 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('mounttest087931240' 'jenkins' 'jenkins' '' q (43a73 51e5b89e 'd') m d700 at 0 mt 1629166614 l 4096 t 0 d 0 ext )
I0817 02:20:57.281239 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 0 newfid 1 
I0817 02:20:57.281272 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rwalk tag 0 
I0817 02:20:57.281380 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Topen tag 0 fid 1 mode 0
I0817 02:20:57.281418 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Ropen tag 0 qid (43a73 51e5b89e 'd') iounit 0
I0817 02:20:57.281518 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 0
I0817 02:20:57.281545 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('mounttest087931240' 'jenkins' 'jenkins' '' q (43a73 51e5b89e 'd') m d700 at 0 mt 1629166614 l 4096 t 0 d 0 ext )
I0817 02:20:57.281647 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 1 offset 0 count 65512
I0817 02:20:57.281745 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 258
I0817 02:20:57.281854 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 1 offset 258 count 65254
I0817 02:20:57.281894 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 0
I0817 02:20:57.281986 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 1 offset 258 count 65512
I0817 02:20:57.282015 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 0
I0817 02:20:57.282127 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0817 02:20:57.282164 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rwalk tag 0 (43a74 51e5b89e '') 
I0817 02:20:57.282252 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:20:57.282284 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (43a74 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:20:57.289705 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:20:57.289749 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (43a74 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:20:57.289862 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 2
I0817 02:20:57.289884 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:20:57.290003 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 0 newfid 2 0:'test-1629166614689234424' 
I0817 02:20:57.290036 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rwalk tag 0 (43a98 51e5b89e '') 
I0817 02:20:57.290144 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:20:57.290171 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('test-1629166614689234424' 'jenkins' 'jenkins' '' q (43a98 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:20:57.297562 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:20:57.297605 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('test-1629166614689234424' 'jenkins' 'jenkins' '' q (43a98 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:20:57.297738 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 2
I0817 02:20:57.297758 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:20:57.297878 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0817 02:20:57.297912 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rwalk tag 0 (43a76 51e5b89e '') 
I0817 02:20:57.298015 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:20:57.298043 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (43a76 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:20:57.305496 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tstat tag 0 fid 2
I0817 02:20:57.305560 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (43a76 51e5b89e '') m 644 at 0 mt 1629166614 l 24 t 0 d 0 ext )
I0817 02:20:57.305696 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 2
I0817 02:20:57.305717 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:20:57.305825 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tread tag 0 fid 1 offset 258 count 65512
I0817 02:20:57.305850 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rread tag 0 count 0
I0817 02:20:57.305950 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 1
I0817 02:20:57.305977 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:20:57.307213 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0817 02:20:57.307259 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rerror tag 0 ename 'file not found' ecode 0
I0817 02:20:57.578896 1587228 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:50570 Tclunk tag 0 fid 0
I0817 02:20:57.578931 1587228 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:50570 Rclunk tag 0
I0817 02:20:57.603276 1587228 main.go:116] stdlog: ufs.go:147 disconnected
I0817 02:20:57.615227 1587228 out.go:177] * Unmounting /mount-9p ...
I0817 02:20:57.615252 1587228 ssh_runner.go:149] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0817 02:20:57.623098 1587228 mount.go:169] unmount for /mount-9p ran successfully
I0817 02:20:57.624703 1587228 out.go:177] 
W0817 02:20:57.624839 1587228 out.go:242] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0817 02:20:57.626679 1587228 out.go:177] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (243.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (103.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
E0817 02:18:14.884056 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:18:42.568228 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
functional_test_tunnel_test.go:218: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:220: (dbg) Run:  kubectl --context functional-20210817021007-1554185 get svc nginx-svc
functional_test_tunnel_test.go:224: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.97.155.188   10.97.155.188   80:32091/TCP   5m44s
functional_test_tunnel_test.go:231: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (103.51s)

                                                
                                    
TestRunningBinaryUpgrade (4.66s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.551258159.exe start -p running-upgrade-20210817024444-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.551258159.exe start -p running-upgrade-20210817024444-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (81.332866ms)

                                                
                                                
-- stdout --
	* [running-upgrade-20210817024444-1554185] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig101463490
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.551258159.exe start -p running-upgrade-20210817024444-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.551258159.exe start -p running-upgrade-20210817024444-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (100.257588ms)

                                                
                                                
-- stdout --
	* [running-upgrade-20210817024444-1554185] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig573651001
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.551258159.exe start -p running-upgrade-20210817024444-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.551258159.exe start -p running-upgrade-20210817024444-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (71.41865ms)

                                                
                                                
-- stdout --
	* [running-upgrade-20210817024444-1554185] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig151597636
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.16.0 start failed: exit status 65
panic.go:613: *** TestRunningBinaryUpgrade FAILED at 2021-08-17 02:44:49.061005763 +0000 UTC m=+3319.855753555
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect running-upgrade-20210817024444-1554185
helpers_test.go:232: (dbg) Non-zero exit: docker inspect running-upgrade-20210817024444-1554185: exit status 1 (41.15776ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: running-upgrade-20210817024444-1554185

                                                
                                                
** /stderr **
helpers_test.go:234: failed to get docker inspect: exit status 1
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-20210817024444-1554185 -n running-upgrade-20210817024444-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-20210817024444-1554185 -n running-upgrade-20210817024444-1554185: exit status 85 (61.761833ms)

                                                
                                                
-- stdout --
	* Profile "running-upgrade-20210817024444-1554185" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-20210817024444-1554185"

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 85 (may be ok)
helpers_test.go:242: "running-upgrade-20210817024444-1554185" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-20210817024444-1554185\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-20210817024444-1554185\"")
helpers_test.go:176: Cleaning up "running-upgrade-20210817024444-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-20210817024444-1554185
--- FAIL: TestRunningBinaryUpgrade (4.66s)

                                                
                                    
TestStoppedBinaryUpgrade (4.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.646085067.exe start -p stopped-upgrade-20210817024440-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.646085067.exe start -p stopped-upgrade-20210817024440-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (93.979668ms)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210817024440-1554185] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig143372974
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

                                                
                                                
** /stderr **
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.646085067.exe start -p stopped-upgrade-20210817024440-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.646085067.exe start -p stopped-upgrade-20210817024440-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (72.441663ms)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210817024440-1554185] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig724650549
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

                                                
                                                
** /stderr **
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.646085067.exe start -p stopped-upgrade-20210817024440-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.646085067.exe start -p stopped-upgrade-20210817024440-1554185 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (70.335362ms)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20210817024440-1554185] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig727603728
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

                                                
                                                
** /stderr **
version_upgrade_test.go:192: legacy v1.16.0 start failed: exit status 65
panic.go:613: *** TestStoppedBinaryUpgrade FAILED at 2021-08-17 02:44:44.397987496 +0000 UTC m=+3315.192735288
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStoppedBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect stopped-upgrade-20210817024440-1554185
helpers_test.go:232: (dbg) Non-zero exit: docker inspect stopped-upgrade-20210817024440-1554185: exit status 1 (40.769809ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: stopped-upgrade-20210817024440-1554185

                                                
                                                
** /stderr **
helpers_test.go:234: failed to get docker inspect: exit status 1
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p stopped-upgrade-20210817024440-1554185 -n stopped-upgrade-20210817024440-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p stopped-upgrade-20210817024440-1554185 -n stopped-upgrade-20210817024440-1554185: exit status 85 (62.152278ms)

                                                
                                                
-- stdout --
	* Profile "stopped-upgrade-20210817024440-1554185" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-20210817024440-1554185"

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 85 (may be ok)
helpers_test.go:242: "stopped-upgrade-20210817024440-1554185" host is not running, skipping log retrieval (state="* Profile \"stopped-upgrade-20210817024440-1554185\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p stopped-upgrade-20210817024440-1554185\"")
helpers_test.go:176: Cleaning up "stopped-upgrade-20210817024440-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p stopped-upgrade-20210817024440-1554185
--- FAIL: TestStoppedBinaryUpgrade (4.54s)
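Note: the root cause above is that the pinned legacy binary (minikube v1.16.0) has no docker driver for arm64, so every start attempt exits with status 65 (PROVIDER_DOCKER_NOT_FOUND) before a cluster ever exists. A minimal sketch of how such a known-unsupported combination could be skipped in a Go integration test follows; the guard and test name below are hypothetical and not taken from the minikube test source.

package legacy_test

import (
	"runtime"
	"testing"
)

func TestStoppedBinaryUpgradeGuard(t *testing.T) {
	// Hypothetical guard: the pinned v1.16.0 release cannot use the docker
	// driver on arm64, so skip instead of failing with exit status 65.
	if runtime.GOARCH == "arm64" {
		t.Skipf("minikube v1.16.0 does not support the docker driver on %s", runtime.GOARCH)
	}
	// ... on supported architectures the legacy start/stop/upgrade flow would run here.
}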

                                                
                                    
TestMissingContainerUpgrade (78.6s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.900610332.exe start -p missing-upgrade-20210817024148-1554185 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:311: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.900610332.exe start -p missing-upgrade-20210817024148-1554185 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (56.765060869s)

                                                
                                                
-- stdout --
	! [missing-upgrade-20210817024148-1554185] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20210817024148-1554185
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7845MB available) ...
	* Deleting "missing-upgrade-20210817024148-1554185" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7845MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.22.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.22.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "missing-upgrade-20210817024148-1554185" running: temporary error created container "missing-upgrade-20210817024148-1554185" is not running yet
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210817024148-1554185" may fix it.: creating host: create: creating: create kic node: check container "missing-upgrade-20210817024148-1554185" running: temporary error created container "missing-upgrade-20210817024148-1554185" is not running yet
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.900610332.exe start -p missing-upgrade-20210817024148-1554185 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:311: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.900610332.exe start -p missing-upgrade-20210817024148-1554185 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (6.497703459s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20210817024148-1554185] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20210817024148-1554185
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-20210817024148-1554185" ...
	* Restarting existing docker container for "missing-upgrade-20210817024148-1554185" ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210817024148-1554185", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210817024148-1554185" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210817024148-1554185", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.900610332.exe start -p missing-upgrade-20210817024148-1554185 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:311: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.900610332.exe start -p missing-upgrade-20210817024148-1554185 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (6.421942363s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20210817024148-1554185] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20210817024148-1554185
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-20210817024148-1554185" ...
	* Restarting existing docker container for "missing-upgrade-20210817024148-1554185" ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210817024148-1554185", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210817024148-1554185" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210817024148-1554185", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: release start failed: exit status 70
panic.go:613: *** TestMissingContainerUpgrade FAILED at 2021-08-17 02:43:02.319830692 +0000 UTC m=+3213.114578493
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect missing-upgrade-20210817024148-1554185
helpers_test.go:236: (dbg) docker inspect missing-upgrade-20210817024148-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e92277a5c93d0433a175bec0a72a52b30c160dd8a80ffd1be2f51b7b819abea5",
	        "Created": "2021-08-17T02:42:28.599184749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 1,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:43:02.095065259Z",
	            "FinishedAt": "2021-08-17T02:43:02.089551294Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/e92277a5c93d0433a175bec0a72a52b30c160dd8a80ffd1be2f51b7b819abea5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e92277a5c93d0433a175bec0a72a52b30c160dd8a80ffd1be2f51b7b819abea5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e92277a5c93d0433a175bec0a72a52b30c160dd8a80ffd1be2f51b7b819abea5/hosts",
	        "LogPath": "/var/lib/docker/containers/e92277a5c93d0433a175bec0a72a52b30c160dd8a80ffd1be2f51b7b819abea5/e92277a5c93d0433a175bec0a72a52b30c160dd8a80ffd1be2f51b7b819abea5-json.log",
	        "Name": "/missing-upgrade-20210817024148-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20210817024148-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/265ed45553b581648bed3d6cf586155ddb5648628efd0260abcb746f09635394-init/diff:/var/lib/docker/overlay2/a8a0d16d0e78f3c51921c8c71e1a8dcf4bb76b55ed8b23b65cb057f85bee34d9/diff:/var/lib/docker/overlay2/2011b6394d457ccb7c13e06be769097a1b7ffe91e57a7c534778620176e04bd2/diff:/var/lib/docker/overlay2/95fffeb379cca40fb3ded85c751393f8c9c0e6b2e5f9ffeb8574933dae0fbf29/diff:/var/lib/docker/overlay2/9036f6de28fc3e54abbe738f95943f3a8767b3ed2797af132e808fb5ca77fe56/diff:/var/lib/docker/overlay2/a92fe25846694bc32ba0d69aebdb9f788b4d4a9b9084e35820ee18920345d85d/diff:/var/lib/docker/overlay2/7360092a6a57f6d89a20f7b1d199fe8a62f5f1d5bc2177f4950788779375d826/diff:/var/lib/docker/overlay2/623d47354dfbb07335ff9e5d57c49b6d072efc9039a9150a286c96f6152d1d91/diff:/var/lib/docker/overlay2/ee3e10d1bbec0476856c4c76cff11a8b591cc595044b89d01a20df9618f2d388/diff:/var/lib/docker/overlay2/77d1bc55da15800d45b91396ead7bfa79cb4e1a5ff7a7c624b3bc9470830b70a/diff:/var/lib/docker/overlay2/ff8a0a
f71429c0fb2bb967eb1cc6874db4e7d753a7e05cfc6eeb2c7093f3ec81/diff:/var/lib/docker/overlay2/e2ea0d2cda096d5e6af436ac8e91c56b30d3beeaad0c5af003a58106466cb43e/diff:/var/lib/docker/overlay2/292f4d06709b59c516da30a70709be3806d0839fcd0dadbfb81adbaa3f305950/diff:/var/lib/docker/overlay2/063500ec9915b4a50b4885a4af909aff4e8c8c90e3eae37ba9b594448b26b00d/diff:/var/lib/docker/overlay2/7b8cc6d37e71df890ddb58f0b04af0c24ca79b4dfa0096dedaee91b5f00ed6fd/diff:/var/lib/docker/overlay2/6204d7a05f3d82fe96b56c1cfaf47d1288634a4c379d96e44f5c8d166e588209/diff:/var/lib/docker/overlay2/610a3e82b54e758129dfbd261e0b8391f6b37f0e8224902db6e22d0c33942e06/diff:/var/lib/docker/overlay2/a3d736db6f45b33b4abceb5fc09ddb58e04d0fcedd73c39110d92e9355fab2cb/diff:/var/lib/docker/overlay2/11d112a977fd7a1f875e52320fe1198b22fbe29b1da75556c5cbea2c148acbbe/diff:/var/lib/docker/overlay2/b30cd03db3e0dff9ecf8e2d92bdde1c47ad05fe8bd4a8777ad8f7eb024414e51/diff:/var/lib/docker/overlay2/be6bb793d0c2ed87a9d785e1dbd92e1c0f4e4a85338bd5f8f3b1aae55fe73a80/diff:/var/lib/d
ocker/overlay2/d93ad42cd76886c2b1e415bc71d75ef158d1b80508704e6863a974dc7272ea95/diff",
	                "MergedDir": "/var/lib/docker/overlay2/265ed45553b581648bed3d6cf586155ddb5648628efd0260abcb746f09635394/merged",
	                "UpperDir": "/var/lib/docker/overlay2/265ed45553b581648bed3d6cf586155ddb5648628efd0260abcb746f09635394/diff",
	                "WorkDir": "/var/lib/docker/overlay2/265ed45553b581648bed3d6cf586155ddb5648628efd0260abcb746f09635394/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20210817024148-1554185",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20210817024148-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20210817024148-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20210817024148-1554185",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20210817024148-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "535e224b5ee023e9dd6aff82d163ecf2e235f68b030d90fa88b2ef56e71f10df",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/535e224b5ee0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "c3c08d57184b3bf3f4ef50e28c25b24b38c8fee76d8e3215d1771ccf0d43b6cf",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-20210817024148-1554185 -n missing-upgrade-20210817024148-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-20210817024148-1554185 -n missing-upgrade-20210817024148-1554185: exit status 7 (94.400139ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "missing-upgrade-20210817024148-1554185" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "missing-upgrade-20210817024148-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-20210817024148-1554185
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-20210817024148-1554185: (4.929270151s)
--- FAIL: TestMissingContainerUpgrade (78.60s)
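Note: the repeated "Template parsing error: ... index of untyped nil" above comes from formatting docker inspect output for a container that is not running: "Ports" is empty in the inspect JSON, so the inner index for "22/tcp" yields nil and the outer index fails. A minimal, self-contained Go sketch reproducing that text/template behaviour (the map literal stands in for the decoded .NetworkSettings.Ports; it is not minikube code):

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// Stand-in for .NetworkSettings.Ports on a stopped container: no port
	// bindings are published, so the "22/tcp" key is absent from the map.
	ports := map[string]interface{}{}
	tmpl := template.Must(template.New("inspect").Parse(
		`{{(index (index . "22/tcp") 0).HostPort}}`))
	// The inner index returns an untyped nil for the missing key; the outer
	// index then errors with "error calling index: index of untyped nil",
	// matching the StartHost failures logged above.
	if err := tmpl.Execute(os.Stdout, ports); err != nil {
		fmt.Println(err)
	}
}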

                                                
                                    
TestPause/serial/Pause (5.96s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-20210817024148-1554185 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-20210817024148-1554185 --alsologtostderr -v=5: exit status 80 (1.880315405s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210817024148-1554185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 02:44:12.848726 1663745 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:44:12.849154 1663745 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:44:12.849168 1663745 out.go:311] Setting ErrFile to fd 2...
	I0817 02:44:12.849172 1663745 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:44:12.849716 1663745 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:44:12.849955 1663745 out.go:305] Setting JSON to false
	I0817 02:44:12.849986 1663745 mustload.go:65] Loading cluster: pause-20210817024148-1554185
	I0817 02:44:12.850557 1663745 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:44:12.851334 1663745 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:12.892337 1663745 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:12.893059 1663745 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210817024148-1554185 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0817 02:44:12.896172 1663745 out.go:177] * Pausing node pause-20210817024148-1554185 ... 
	I0817 02:44:12.896195 1663745 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:12.896509 1663745 ssh_runner.go:149] Run: systemctl --version
	I0817 02:44:12.896559 1663745 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:12.927952 1663745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:13.030467 1663745 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:13.038979 1663745 pause.go:50] kubelet running: true
	I0817 02:44:13.039028 1663745 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 02:44:13.177797 1663745 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 02:44:13.177877 1663745 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 02:44:13.252883 1663745 cri.go:76] found id: "cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42"
	I0817 02:44:13.252906 1663745 cri.go:76] found id: "6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a"
	I0817 02:44:13.252911 1663745 cri.go:76] found id: "335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0"
	I0817 02:44:13.252916 1663745 cri.go:76] found id: "aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66"
	I0817 02:44:13.252920 1663745 cri.go:76] found id: "ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:13.252926 1663745 cri.go:76] found id: "f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367"
	I0817 02:44:13.252933 1663745 cri.go:76] found id: "fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583"
	I0817 02:44:13.252940 1663745 cri.go:76] found id: "63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08"
	I0817 02:44:13.252947 1663745 cri.go:76] found id: ""
	I0817 02:44:13.252998 1663745 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:44:13.288960 1663745 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","pid":1575,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0/rootfs","created":"2021-08-17T02:43:04.897293883Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","pid":919,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60/rootfs","created":"2021-08-17T02:42:39.314001317Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210817024148-1554185_ed0234c9cb81abc8dc5bbcdfaf787883"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d","pid":2456,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d/rootfs","created":"2021-08-17T02:44:11.973325641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea70277
5058ebdb266d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_562918b9-84e2-4f7e-9a0a-70742893e39d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","pid":1063,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08/rootfs","created":"2021-08-17T02:42:39.515581049Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f9
6ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a/rootfs","created":"2021-08-17T02:43:53.705600139Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb/rootfs","created":"2021-08-17T02:42:39.34752748Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26
e98b77ee1b2764d9241bbbb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210817024148-1554185_0718a393987d1be3d2ae606f942a3f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa/rootfs","created":"2021-08-17T02:43:04.691568232Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-h6fvl_e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926e
c05e9","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9/rootfs","created":"2021-08-17T02:42:39.402749597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210817024148-1554185_a97ec150b358b2334cd33dc4c454d661"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a9
21033b3042e86497dd8533ae66/rootfs","created":"2021-08-17T02:43:04.899987912Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","pid":1861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec/rootfs","created":"2021-08-17T02:43:53.53831483Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-bzchw_5c5ce93f-6aa2-4786-b8c6-a4a48b4d9f
fc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9/rootfs","created":"2021-08-17T02:42:39.343864386Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210817024148-1554185_b3d802d137ed3edc177474465272f732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42","pid":2488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402
115529d42","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42/rootfs","created":"2021-08-17T02:44:12.067979642Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c/rootfs","created":"2021-08-17T02:43:04.691547802Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","io.ku
bernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9lnwm_8b4c4c45-1613-49c8-9b03-e13120205af4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","pid":1140,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/rootfs","created":"2021-08-17T02:42:39.62877376Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","pid":1098,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c92
61c63e9d4a7dc6fe54ab966a367","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367/rootfs","created":"2021-08-17T02:42:39.549869273Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","pid":1061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583/rootfs","created":"2021-08-17T02:42:39.510915707Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3daddb
ac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60"},"owner":"root"}]
	I0817 02:44:13.289204 1663745 cri.go:113] list returned 16 containers
	I0817 02:44:13.289216 1663745 cri.go:116] container: {ID:335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 Status:running}
	I0817 02:44:13.289234 1663745 cri.go:116] container: {ID:3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 Status:running}
	I0817 02:44:13.289242 1663745 cri.go:118] skipping 3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 - not in ps
	I0817 02:44:13.289247 1663745 cri.go:116] container: {ID:62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d Status:running}
	I0817 02:44:13.289259 1663745 cri.go:118] skipping 62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d - not in ps
	I0817 02:44:13.289264 1663745 cri.go:116] container: {ID:63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 Status:running}
	I0817 02:44:13.289272 1663745 cri.go:116] container: {ID:6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a Status:running}
	I0817 02:44:13.289279 1663745 cri.go:116] container: {ID:73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb Status:running}
	I0817 02:44:13.289287 1663745 cri.go:118] skipping 73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb - not in ps
	I0817 02:44:13.289293 1663745 cri.go:116] container: {ID:771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa Status:running}
	I0817 02:44:13.289300 1663745 cri.go:118] skipping 771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa - not in ps
	I0817 02:44:13.289304 1663745 cri.go:116] container: {ID:7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 Status:running}
	I0817 02:44:13.289312 1663745 cri.go:118] skipping 7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 - not in ps
	I0817 02:44:13.289316 1663745 cri.go:116] container: {ID:aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 Status:running}
	I0817 02:44:13.289324 1663745 cri.go:116] container: {ID:b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec Status:running}
	I0817 02:44:13.289329 1663745 cri.go:118] skipping b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec - not in ps
	I0817 02:44:13.289341 1663745 cri.go:116] container: {ID:bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 Status:running}
	I0817 02:44:13.289346 1663745 cri.go:118] skipping bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 - not in ps
	I0817 02:44:13.289351 1663745 cri.go:116] container: {ID:cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42 Status:running}
	I0817 02:44:13.289360 1663745 cri.go:116] container: {ID:ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c Status:running}
	I0817 02:44:13.289365 1663745 cri.go:118] skipping ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c - not in ps
	I0817 02:44:13.289373 1663745 cri.go:116] container: {ID:ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 Status:running}
	I0817 02:44:13.289378 1663745 cri.go:116] container: {ID:f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 Status:running}
	I0817 02:44:13.289387 1663745 cri.go:116] container: {ID:fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 Status:running}
	I0817 02:44:13.289433 1663745 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0
	I0817 02:44:13.303223 1663745 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08
	I0817 02:44:13.315164 1663745 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:44:13Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0817 02:44:13.592451 1663745 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:13.601782 1663745 pause.go:50] kubelet running: false
	I0817 02:44:13.601850 1663745 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 02:44:13.715919 1663745 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 02:44:13.716025 1663745 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 02:44:13.793671 1663745 cri.go:76] found id: "cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42"
	I0817 02:44:13.793702 1663745 cri.go:76] found id: "6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a"
	I0817 02:44:13.793708 1663745 cri.go:76] found id: "335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0"
	I0817 02:44:13.793713 1663745 cri.go:76] found id: "aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66"
	I0817 02:44:13.793717 1663745 cri.go:76] found id: "ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:13.793723 1663745 cri.go:76] found id: "f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367"
	I0817 02:44:13.793732 1663745 cri.go:76] found id: "fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583"
	I0817 02:44:13.793738 1663745 cri.go:76] found id: "63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08"
	I0817 02:44:13.793746 1663745 cri.go:76] found id: ""
	I0817 02:44:13.793801 1663745 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:44:13.830793 1663745 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","pid":1575,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0/rootfs","created":"2021-08-17T02:43:04.897293883Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","pid":919,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","rootfs":"/run/containerd/io.containerd.runtime.
v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60/rootfs","created":"2021-08-17T02:42:39.314001317Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210817024148-1554185_ed0234c9cb81abc8dc5bbcdfaf787883"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d","pid":2456,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d/rootfs","created":"2021-08-17T02:44:11.973325641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775
058ebdb266d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_562918b9-84e2-4f7e-9a0a-70742893e39d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","pid":1063,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08/rootfs","created":"2021-08-17T02:42:39.515581049Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96
ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a/rootfs","created":"2021-08-17T02:43:53.705600139Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb/rootfs","created":"2021-08-17T02:42:39.34752748Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e
98b77ee1b2764d9241bbbb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210817024148-1554185_0718a393987d1be3d2ae606f942a3f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa/rootfs","created":"2021-08-17T02:43:04.691568232Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-h6fvl_e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec
05e9","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9/rootfs","created":"2021-08-17T02:42:39.402749597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210817024148-1554185_a97ec150b358b2334cd33dc4c454d661"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a92
1033b3042e86497dd8533ae66/rootfs","created":"2021-08-17T02:43:04.899987912Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","pid":1861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec/rootfs","created":"2021-08-17T02:43:53.53831483Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-bzchw_5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ff
c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9/rootfs","created":"2021-08-17T02:42:39.343864386Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210817024148-1554185_b3d802d137ed3edc177474465272f732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42","pid":2488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa4021
15529d42","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42/rootfs","created":"2021-08-17T02:44:12.067979642Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c/rootfs","created":"2021-08-17T02:43:04.691547802Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","io.kub
ernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9lnwm_8b4c4c45-1613-49c8-9b03-e13120205af4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","pid":1140,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/rootfs","created":"2021-08-17T02:42:39.62877376Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","pid":1098,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c926
1c63e9d4a7dc6fe54ab966a367","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367/rootfs","created":"2021-08-17T02:42:39.549869273Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","pid":1061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583/rootfs","created":"2021-08-17T02:42:39.510915707Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3daddba
c69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60"},"owner":"root"}]
	I0817 02:44:13.831028 1663745 cri.go:113] list returned 16 containers
	I0817 02:44:13.831043 1663745 cri.go:116] container: {ID:335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 Status:paused}
	I0817 02:44:13.831055 1663745 cri.go:122] skipping {335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 paused}: state = "paused", want "running"
	I0817 02:44:13.831070 1663745 cri.go:116] container: {ID:3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 Status:running}
	I0817 02:44:13.831075 1663745 cri.go:118] skipping 3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 - not in ps
	I0817 02:44:13.831083 1663745 cri.go:116] container: {ID:62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d Status:running}
	I0817 02:44:13.831088 1663745 cri.go:118] skipping 62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d - not in ps
	I0817 02:44:13.831092 1663745 cri.go:116] container: {ID:63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 Status:running}
	I0817 02:44:13.831099 1663745 cri.go:116] container: {ID:6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a Status:running}
	I0817 02:44:13.831108 1663745 cri.go:116] container: {ID:73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb Status:running}
	I0817 02:44:13.831113 1663745 cri.go:118] skipping 73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb - not in ps
	I0817 02:44:13.831118 1663745 cri.go:116] container: {ID:771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa Status:running}
	I0817 02:44:13.831129 1663745 cri.go:118] skipping 771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa - not in ps
	I0817 02:44:13.831133 1663745 cri.go:116] container: {ID:7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 Status:running}
	I0817 02:44:13.831138 1663745 cri.go:118] skipping 7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 - not in ps
	I0817 02:44:13.831143 1663745 cri.go:116] container: {ID:aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 Status:running}
	I0817 02:44:13.831148 1663745 cri.go:116] container: {ID:b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec Status:running}
	I0817 02:44:13.831153 1663745 cri.go:118] skipping b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec - not in ps
	I0817 02:44:13.831162 1663745 cri.go:116] container: {ID:bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 Status:running}
	I0817 02:44:13.831169 1663745 cri.go:118] skipping bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 - not in ps
	I0817 02:44:13.831177 1663745 cri.go:116] container: {ID:cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42 Status:running}
	I0817 02:44:13.831182 1663745 cri.go:116] container: {ID:ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c Status:running}
	I0817 02:44:13.831192 1663745 cri.go:118] skipping ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c - not in ps
	I0817 02:44:13.831196 1663745 cri.go:116] container: {ID:ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 Status:running}
	I0817 02:44:13.831201 1663745 cri.go:116] container: {ID:f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 Status:running}
	I0817 02:44:13.831210 1663745 cri.go:116] container: {ID:fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 Status:running}
	I0817 02:44:13.831257 1663745 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08
	I0817 02:44:13.845278 1663745 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a
	I0817 02:44:13.858574 1663745 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:44:13Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0817 02:44:14.399948 1663745 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:14.409205 1663745 pause.go:50] kubelet running: false
	I0817 02:44:14.409268 1663745 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 02:44:14.518735 1663745 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 02:44:14.518848 1663745 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 02:44:14.594117 1663745 cri.go:76] found id: "cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42"
	I0817 02:44:14.594166 1663745 cri.go:76] found id: "6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a"
	I0817 02:44:14.594178 1663745 cri.go:76] found id: "335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0"
	I0817 02:44:14.594183 1663745 cri.go:76] found id: "aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66"
	I0817 02:44:14.594198 1663745 cri.go:76] found id: "ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:14.594213 1663745 cri.go:76] found id: "f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367"
	I0817 02:44:14.594217 1663745 cri.go:76] found id: "fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583"
	I0817 02:44:14.594222 1663745 cri.go:76] found id: "63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08"
	I0817 02:44:14.594226 1663745 cri.go:76] found id: ""
	I0817 02:44:14.594270 1663745 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:44:14.629135 1663745 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","pid":1575,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0/rootfs","created":"2021-08-17T02:43:04.897293883Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","pid":919,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","rootfs":"/run/containerd/io.containerd.runtime.
v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60/rootfs","created":"2021-08-17T02:42:39.314001317Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210817024148-1554185_ed0234c9cb81abc8dc5bbcdfaf787883"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d","pid":2456,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d/rootfs","created":"2021-08-17T02:44:11.973325641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775
058ebdb266d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_562918b9-84e2-4f7e-9a0a-70742893e39d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","pid":1063,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08/rootfs","created":"2021-08-17T02:42:39.515581049Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96c
eaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a/rootfs","created":"2021-08-17T02:43:53.705600139Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb/rootfs","created":"2021-08-17T02:42:39.34752748Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e9
8b77ee1b2764d9241bbbb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210817024148-1554185_0718a393987d1be3d2ae606f942a3f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa/rootfs","created":"2021-08-17T02:43:04.691568232Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-h6fvl_e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec0
5e9","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9/rootfs","created":"2021-08-17T02:42:39.402749597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210817024148-1554185_a97ec150b358b2334cd33dc4c454d661"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921
033b3042e86497dd8533ae66/rootfs","created":"2021-08-17T02:43:04.899987912Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","pid":1861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec/rootfs","created":"2021-08-17T02:43:53.53831483Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-bzchw_5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc
"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9/rootfs","created":"2021-08-17T02:42:39.343864386Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210817024148-1554185_b3d802d137ed3edc177474465272f732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42","pid":2488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa40211
5529d42","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42/rootfs","created":"2021-08-17T02:44:12.067979642Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c/rootfs","created":"2021-08-17T02:43:04.691547802Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","io.kube
rnetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9lnwm_8b4c4c45-1613-49c8-9b03-e13120205af4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","pid":1140,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/rootfs","created":"2021-08-17T02:42:39.62877376Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","pid":1098,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261
c63e9d4a7dc6fe54ab966a367","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367/rootfs","created":"2021-08-17T02:42:39.549869273Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","pid":1061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583/rootfs","created":"2021-08-17T02:42:39.510915707Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3daddbac
69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60"},"owner":"root"}]
	I0817 02:44:14.629349 1663745 cri.go:113] list returned 16 containers
	I0817 02:44:14.629362 1663745 cri.go:116] container: {ID:335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 Status:paused}
	I0817 02:44:14.629372 1663745 cri.go:122] skipping {335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 paused}: state = "paused", want "running"
	I0817 02:44:14.629382 1663745 cri.go:116] container: {ID:3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 Status:running}
	I0817 02:44:14.629391 1663745 cri.go:118] skipping 3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 - not in ps
	I0817 02:44:14.629402 1663745 cri.go:116] container: {ID:62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d Status:running}
	I0817 02:44:14.629409 1663745 cri.go:118] skipping 62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d - not in ps
	I0817 02:44:14.629418 1663745 cri.go:116] container: {ID:63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 Status:paused}
	I0817 02:44:14.629424 1663745 cri.go:122] skipping {63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 paused}: state = "paused", want "running"
	I0817 02:44:14.629433 1663745 cri.go:116] container: {ID:6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a Status:running}
	I0817 02:44:14.629438 1663745 cri.go:116] container: {ID:73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb Status:running}
	I0817 02:44:14.629443 1663745 cri.go:118] skipping 73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb - not in ps
	I0817 02:44:14.629449 1663745 cri.go:116] container: {ID:771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa Status:running}
	I0817 02:44:14.629457 1663745 cri.go:118] skipping 771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa - not in ps
	I0817 02:44:14.629461 1663745 cri.go:116] container: {ID:7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 Status:running}
	I0817 02:44:14.629469 1663745 cri.go:118] skipping 7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 - not in ps
	I0817 02:44:14.629482 1663745 cri.go:116] container: {ID:aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 Status:running}
	I0817 02:44:14.629487 1663745 cri.go:116] container: {ID:b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec Status:running}
	I0817 02:44:14.629496 1663745 cri.go:118] skipping b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec - not in ps
	I0817 02:44:14.629500 1663745 cri.go:116] container: {ID:bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 Status:running}
	I0817 02:44:14.629510 1663745 cri.go:118] skipping bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 - not in ps
	I0817 02:44:14.629514 1663745 cri.go:116] container: {ID:cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42 Status:running}
	I0817 02:44:14.629521 1663745 cri.go:116] container: {ID:ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c Status:running}
	I0817 02:44:14.629528 1663745 cri.go:118] skipping ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c - not in ps
	I0817 02:44:14.629532 1663745 cri.go:116] container: {ID:ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 Status:running}
	I0817 02:44:14.629537 1663745 cri.go:116] container: {ID:f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 Status:running}
	I0817 02:44:14.629542 1663745 cri.go:116] container: {ID:fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 Status:running}
	I0817 02:44:14.629590 1663745 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a
	I0817 02:44:14.642439 1663745 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66
	I0817 02:44:14.657111 1663745 out.go:177] 
	W0817 02:44:14.657271 1663745 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:44:14Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:44:14Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0817 02:44:14.657288 1663745 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0817 02:44:14.665527 1663745 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_2.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_2.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0817 02:44:14.667106 1663745 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-20210817024148-1554185 --alsologtostderr -v=5" : exit status 80
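The failure above comes from the pause path invoking "sudo runc --root /run/containerd/runc/k8s.io pause" with two container IDs in a single command; as the captured usage text shows, runc pause takes exactly one container ID, so the command exits with status 1 and minikube aborts with GUEST_PAUSE. A minimal Go sketch of the idea, assuming a hypothetical helper pauseContainers (this is illustrative only, not minikube's actual implementation), pauses each container in its own runc invocation:

	// pause_sketch.go - illustrative only. Shows why the log above fails:
	// `runc pause` accepts exactly one container ID, so pausing several
	// containers needs one invocation per ID rather than one command with
	// the whole list appended.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// pauseContainers is a hypothetical helper: it runs
	// `sudo runc --root <root> pause <id>` once per container instead of
	// `runc pause <id1> <id2> ...`, which exits with
	// "pause requires exactly 1 argument(s)" as seen in the log.
	func pauseContainers(root string, ids []string) error {
		for _, id := range ids {
			cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
			}
		}
		return nil
	}

	func main() {
		ids := []string{
			"6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a",
			"aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66",
		}
		if err := pauseContainers("/run/containerd/runc/k8s.io", ids); err != nil {
			fmt.Println(err)
		}
	}
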
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210817024148-1554185
helpers_test.go:236: (dbg) docker inspect pause-20210817024148-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b",
	        "Created": "2021-08-17T02:41:50.320902147Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1657229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:41:51.004888651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/hosts",
	        "LogPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b-json.log",
	        "Name": "/pause-20210817024148-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20210817024148-1554185:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210817024148-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210817024148-1554185",
	                "Source": "/var/lib/docker/volumes/pause-20210817024148-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210817024148-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210817024148-1554185",
	                "name.minikube.sigs.k8s.io": "pause-20210817024148-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8a8e6cd79da22a5765a51578b1ea6e8efa8e27c6c5dbb571e80d79023db3847",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50410"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50406"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50408"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50407"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d8a8e6cd79da",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210817024148-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b9b1ad2a3171",
	                        "pause-20210817024148-1554185"
	                    ],
	                    "NetworkID": "747733296a426a6f52daff293191c7fb9ea960ba5380b91809f97050286a1932",
	                    "EndpointID": "54b17c5460167eb93db2a6807c51835973c485175b87062600e587d432698b14",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
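The relevant signal in the docker inspect output above is the State block: the node container still reports "Running": true and "Paused": false even though the pause command failed. A small Go sketch (illustrative only; the test harness does not use this code) that extracts just those two fields from `docker inspect`:

	// inspect_sketch.go - illustrative only. Decodes the State.Running and
	// State.Paused fields that the post-mortem above relies on.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspectState struct {
		State struct {
			Running bool `json:"Running"`
			Paused  bool `json:"Paused"`
		} `json:"State"`
	}

	func main() {
		// docker inspect prints a JSON array of container objects.
		out, err := exec.Command("docker", "inspect", "pause-20210817024148-1554185").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		var states []inspectState
		if err := json.Unmarshal(out, &states); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, s := range states {
			fmt.Printf("Running=%v Paused=%v\n", s.State.Running, s.State.Paused)
		}
	}
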
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185: exit status 2 (313.230615ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p pause-20210817024148-1554185 logs -n 25
helpers_test.go:253: TestPause/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                             Args                              |                   Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| profile | list --output json                                            | minikube                                    | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:46 UTC | Tue, 17 Aug 2021 02:29:47 UTC |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:47 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | cp testdata/cp-test.txt                                       |                                             |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                      |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:48 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | ssh sudo cat                                                  |                                             |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                      |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185 cp testdata/cp-test.txt      | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:48 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | multinode-20210817022620-1554185-m02:/home/docker/cp-test.txt |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:48 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | ssh -n                                                        |                                             |         |         |                               |                               |
	|         | multinode-20210817022620-1554185-m02                          |                                             |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185 cp testdata/cp-test.txt      | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:29:49 UTC |
	|         | multinode-20210817022620-1554185-m03:/home/docker/cp-test.txt |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:29:49 UTC |
	|         | ssh -n                                                        |                                             |         |         |                               |                               |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:30:09 UTC |
	|         | node stop m03                                                 |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:30:10 UTC | Tue, 17 Aug 2021 02:30:40 UTC |
	|         | node start m03 --alsologtostderr                              |                                             |         |         |                               |                               |
	| stop    | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:30:41 UTC | Tue, 17 Aug 2021 02:31:41 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:31:41 UTC | Tue, 17 Aug 2021 02:34:02 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	|         | --wait=true -v=8                                              |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:34:02 UTC | Tue, 17 Aug 2021 02:34:26 UTC |
	|         | node delete m03                                               |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:34:27 UTC | Tue, 17 Aug 2021 02:35:07 UTC |
	|         | stop                                                          |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:35:07 UTC | Tue, 17 Aug 2021 02:36:46 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	|         | --wait=true -v=8                                              |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	|         | --driver=docker                                               |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185-m03        | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:36:47 UTC | Tue, 17 Aug 2021 02:37:57 UTC |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	|         | --driver=docker                                               |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| delete  | -p                                                            | multinode-20210817022620-1554185-m03        | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:37:57 UTC | Tue, 17 Aug 2021 02:38:00 UTC |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	| delete  | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:38:00 UTC | Tue, 17 Aug 2021 02:38:04 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	| start   | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:39:35 UTC | Tue, 17 Aug 2021 02:40:42 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --memory=2048 --driver=docker                                 |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| stop    | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:40:42 UTC | Tue, 17 Aug 2021 02:40:42 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --cancel-scheduled                                            |                                             |         |         |                               |                               |
	| stop    | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:40:55 UTC | Tue, 17 Aug 2021 02:41:20 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --schedule 5s                                                 |                                             |         |         |                               |                               |
	| delete  | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:20 UTC | Tue, 17 Aug 2021 02:41:25 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	| delete  | -p                                                            | insufficient-storage-20210817024125-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:42 UTC | Tue, 17 Aug 2021 02:41:48 UTC |
	|         | insufficient-storage-20210817024125-1554185                   |                                             |         |         |                               |                               |
	| delete  | -p                                                            | missing-upgrade-20210817024148-1554185      | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:02 UTC | Tue, 17 Aug 2021 02:43:07 UTC |
	|         | missing-upgrade-20210817024148-1554185                        |                                             |         |         |                               |                               |
	| start   | -p                                                            | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:48 UTC | Tue, 17 Aug 2021 02:43:55 UTC |
	|         | pause-20210817024148-1554185                                  |                                             |         |         |                               |                               |
	|         | --memory=2048                                                 |                                             |         |         |                               |                               |
	|         | --install-addons=false                                        |                                             |         |         |                               |                               |
	|         | --wait=all --driver=docker                                    |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| start   | -p                                                            | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:55 UTC | Tue, 17 Aug 2021 02:44:12 UTC |
	|         | pause-20210817024148-1554185                                  |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	|         | -v=1 --driver=docker                                          |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	|---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 02:43:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
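Each entry that follows uses the klog layout described by the format line above: a severity letter (I/W/E/F), the date as mmdd, a timestamp, the emitting thread id, the source file:line, and the message. As a minimal sketch (assuming the log has been saved to a hypothetical file named last_start.log), the fields can be pulled apart from the shell like this:

    # Show only warning/error/fatal entries from a saved copy of this log.
    grep -E '^[WEF][0-9]{4} ' last_start.log
    # Split each klog entry into timestamp, source location, and message.
    sed -nE 's/^([IWEF][0-9]{4} [0-9:.]+) +[0-9]+ ([^]]+)] (.*)$/\1 | \2 | \3/p' last_start.log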
	I0817 02:43:55.935620 1662846 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:43:55.935723 1662846 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:43:55.935738 1662846 out.go:311] Setting ErrFile to fd 2...
	I0817 02:43:55.935766 1662846 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:43:55.935946 1662846 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:43:55.936251 1662846 out.go:305] Setting JSON to false
	I0817 02:43:55.937622 1662846 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37574,"bootTime":1629130662,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:43:55.937719 1662846 start.go:121] virtualization:  
	I0817 02:43:55.939817 1662846 out.go:177] * [pause-20210817024148-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:43:55.941669 1662846 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:43:55.940702 1662846 notify.go:169] Checking for updates...
	I0817 02:43:55.943679 1662846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:43:55.945437 1662846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:43:55.946802 1662846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:43:55.947210 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:43:55.947633 1662846 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:43:56.027823 1662846 docker.go:132] docker version: linux-20.10.8
	I0817 02:43:56.027923 1662846 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:43:56.176370 1662846 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-17 02:43:56.090848407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:43:56.176502 1662846 docker.go:244] overlay module found
	I0817 02:43:56.179749 1662846 out.go:177] * Using the docker driver based on existing profile
	I0817 02:43:56.179775 1662846 start.go:278] selected driver: docker
	I0817 02:43:56.179782 1662846 start.go:751] validating driver "docker" against &{Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:43:56.179866 1662846 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 02:43:56.179980 1662846 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:43:56.292126 1662846 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-17 02:43:56.216837922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:43:56.292468 1662846 cni.go:93] Creating CNI manager for ""
	I0817 02:43:56.292486 1662846 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:43:56.292501 1662846 start_flags.go:277] config:
	{Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:43:56.294520 1662846 out.go:177] * Starting control plane node pause-20210817024148-1554185 in cluster pause-20210817024148-1554185
	I0817 02:43:56.294554 1662846 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 02:43:56.296008 1662846 out.go:177] * Pulling base image ...
	I0817 02:43:56.296031 1662846 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:43:56.296059 1662846 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 02:43:56.296077 1662846 cache.go:56] Caching tarball of preloaded images
	I0817 02:43:56.296206 1662846 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 02:43:56.296231 1662846 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
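The preload check above looks for a cached images tarball before falling back to a download. A quick way to see which preloads this run has cached (MINIKUBE_HOME is a placeholder for the long .minikube path shown in the log):

    # List the cached preload tarballs for this .minikube directory.
    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/"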
	I0817 02:43:56.296337 1662846 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/config.json ...
	I0817 02:43:56.296506 1662846 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 02:43:56.358839 1662846 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 02:43:56.358863 1662846 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 02:43:56.358876 1662846 cache.go:205] Successfully downloaded all kic artifacts
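The kic base-image check above asks the local docker daemon whether the kicbase image is already present so the pull can be skipped. A shell equivalent (sketch only; to pin it down further, match on the sha256 digest from the log line above):

    # Show any locally present kicbase images together with their digests.
    docker images --digests gcr.io/k8s-minikube/kicbase-builds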
	I0817 02:43:56.358910 1662846 start.go:313] acquiring machines lock for pause-20210817024148-1554185: {Name:mk43ad0c6625870b459afd5900940b78473b954e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 02:43:56.358994 1662846 start.go:317] acquired machines lock for "pause-20210817024148-1554185" in 57.583µs
	I0817 02:43:56.359016 1662846 start.go:93] Skipping create...Using existing machine configuration
	I0817 02:43:56.359025 1662846 fix.go:55] fixHost starting: 
	I0817 02:43:56.359303 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:43:56.396845 1662846 fix.go:108] recreateIfNeeded on pause-20210817024148-1554185: state=Running err=<nil>
	W0817 02:43:56.396879 1662846 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 02:43:56.399120 1662846 out.go:177] * Updating the running docker "pause-20210817024148-1554185" container ...
	I0817 02:43:56.399143 1662846 machine.go:88] provisioning docker machine ...
	I0817 02:43:56.399156 1662846 ubuntu.go:169] provisioning hostname "pause-20210817024148-1554185"
	I0817 02:43:56.399223 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:56.460270 1662846 main.go:130] libmachine: Using SSH client type: native
	I0817 02:43:56.460437 1662846 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50410 <nil> <nil>}
	I0817 02:43:56.460450 1662846 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210817024148-1554185 && echo "pause-20210817024148-1554185" | sudo tee /etc/hostname
	I0817 02:43:56.592739 1662846 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210817024148-1554185
	
	I0817 02:43:56.592882 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:56.636319 1662846 main.go:130] libmachine: Using SSH client type: native
	I0817 02:43:56.636499 1662846 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50410 <nil> <nil>}
	I0817 02:43:56.636520 1662846 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210817024148-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210817024148-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210817024148-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 02:43:56.775961 1662846 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 02:43:56.775984 1662846 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 02:43:56.776018 1662846 ubuntu.go:177] setting up certificates
	I0817 02:43:56.776029 1662846 provision.go:83] configureAuth start
	I0817 02:43:56.776079 1662846 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817024148-1554185
	I0817 02:43:56.827590 1662846 provision.go:138] copyHostCerts
	I0817 02:43:56.827646 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 02:43:56.827654 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 02:43:56.827713 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 02:43:56.827792 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 02:43:56.827799 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 02:43:56.827820 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 02:43:56.827872 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 02:43:56.827880 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 02:43:56.827900 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 02:43:56.827946 1662846 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.pause-20210817024148-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210817024148-1554185]
	I0817 02:43:57.691741 1662846 provision.go:172] copyRemoteCerts
	I0817 02:43:57.691838 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 02:43:57.691973 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:57.733998 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:57.822192 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 02:43:57.856857 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 02:43:57.883796 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 02:43:57.906578 1662846 provision.go:86] duration metric: configureAuth took 1.130540743s
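configureAuth regenerated the machine's server certificate with the SANs listed a few lines up (192.168.49.2, 127.0.0.1, localhost, minikube, and the profile name). A sketch for verifying them, with MINIKUBE_HOME again standing in for the .minikube path from the log:

    # Print the Subject Alternative Names on the freshly generated server certificate.
    openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
      | grep -A1 'Subject Alternative Name'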
	I0817 02:43:57.906595 1662846 ubuntu.go:193] setting minikube options for container-runtime
	I0817 02:43:57.906755 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:43:57.906762 1662846 machine.go:91] provisioned docker machine in 1.507614043s
	I0817 02:43:57.906767 1662846 start.go:267] post-start starting for "pause-20210817024148-1554185" (driver="docker")
	I0817 02:43:57.906773 1662846 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 02:43:57.906827 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 02:43:57.906865 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:57.948809 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.037379 1662846 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 02:43:58.040800 1662846 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 02:43:58.040823 1662846 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 02:43:58.040834 1662846 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 02:43:58.040841 1662846 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 02:43:58.040851 1662846 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 02:43:58.040896 1662846 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 02:43:58.040978 1662846 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 02:43:58.041076 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 02:43:58.047645 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:43:58.063186 1662846 start.go:270] post-start completed in 156.406695ms
	I0817 02:43:58.063236 1662846 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:43:58.063283 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.095312 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.179965 1662846 fix.go:57] fixHost completed within 1.82093535s
	I0817 02:43:58.179990 1662846 start.go:80] releasing machines lock for "pause-20210817024148-1554185", held for 1.820983908s
	I0817 02:43:58.180071 1662846 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817024148-1554185
	I0817 02:43:58.213692 1662846 ssh_runner.go:149] Run: systemctl --version
	I0817 02:43:58.213738 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.213787 1662846 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 02:43:58.213879 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.294808 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.305791 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.579632 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 02:43:58.595650 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 02:43:58.606604 1662846 docker.go:153] disabling docker service ...
	I0817 02:43:58.606667 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 02:43:58.617075 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 02:43:58.626385 1662846 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 02:43:58.766845 1662846 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 02:43:58.893614 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 02:43:58.903792 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 02:43:58.915967 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
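The containerd configuration is shipped to the node as a base64 payload that the command above pipes through base64 -d into /etc/containerd/config.toml. To read what actually landed there, either decode the payload locally or cat the rendered file on the node (the payload.b64 filename is hypothetical):

    # Decode the payload copied from the command above ...
    base64 -d payload.b64 | less
    # ... or read the rendered config straight off the node.
    out/minikube-linux-arm64 -p pause-20210817024148-1554185 ssh -- sudo cat /etc/containerd/config.toml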
	I0817 02:43:58.928706 1662846 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 02:43:58.935023 1662846 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 02:43:58.941385 1662846 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 02:43:59.052351 1662846 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 02:43:59.207573 1662846 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 02:43:59.207636 1662846 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 02:43:59.211818 1662846 start.go:413] Will wait 60s for crictl version
	I0817 02:43:59.211935 1662846 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:43:59.253079 1662846 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T02:43:59Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
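containerd was restarted a moment earlier, so its CRI endpoint briefly reports "server is not initialized yet" and minikube schedules a retry. A manual equivalent, run on the node (for example over minikube ssh), would be to poll crictl until it answers. Note also that the entries that follow come from pid 1660780: a second minikube process, driving the kubernetes-upgrade profile, is interleaved with this one in the log.

    # Poll until containerd's CRI endpoint responds, then print the runtime version.
    until sudo crictl version >/dev/null 2>&1; do sleep 1; done
    sudo crictl version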
	I0817 02:44:02.120599 1660780 out.go:204]   - Configuring RBAC rules ...
	I0817 02:44:02.550006 1660780 cni.go:93] Creating CNI manager for ""
	I0817 02:44:02.550033 1660780 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:44:02.551994 1660780 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:44:02.552050 1660780 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:44:02.555689 1660780 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0817 02:44:02.555706 1660780 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:44:02.567554 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:44:03.257879 1660780 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:44:03.258050 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=kubernetes-upgrade-20210817024307-1554185 minikube.k8s.io/updated_at=2021_08_17T02_44_03_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:44:03.258163 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:44:03.421126 1660780 kubeadm.go:985] duration metric: took 163.181168ms to wait for elevateKubeSystemPrivileges.
	I0817 02:44:03.421159 1660780 ops.go:34] apiserver oom_adj: 16
	I0817 02:44:03.421165 1660780 ops.go:39] adjusting apiserver oom_adj to -10
	I0817 02:44:03.421175 1660780 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:44:03.435297 1660780 kubeadm.go:392] StartCluster complete in 20.942151429s
	I0817 02:44:03.435324 1660780 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:03.435396 1660780 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:44:03.436729 1660780 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:03.437591 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:03.960675 1660780 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20210817024307-1554185" rescaled to 1
	I0817 02:44:03.960735 1660780 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0817 02:44:03.962792 1660780 out.go:177] * Verifying Kubernetes components...
	I0817 02:44:03.962884 1660780 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:03.960776 1660780 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:44:03.961119 1660780 config.go:177] Loaded profile config "kubernetes-upgrade-20210817024307-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0817 02:44:03.961133 1660780 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 02:44:03.963065 1660780 addons.go:59] Setting storage-provisioner=true in profile "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963086 1660780 addons.go:135] Setting addon storage-provisioner=true in "kubernetes-upgrade-20210817024307-1554185"
	W0817 02:44:03.963092 1660780 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:44:03.963117 1660780 host.go:66] Checking if "kubernetes-upgrade-20210817024307-1554185" exists ...
	I0817 02:44:03.963609 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:03.963742 1660780 addons.go:59] Setting default-storageclass=true in profile "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963759 1660780 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963983 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:04.042068 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:04.045974 1660780 addons.go:135] Setting addon default-storageclass=true in "kubernetes-upgrade-20210817024307-1554185"
	W0817 02:44:04.045994 1660780 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:44:04.046020 1660780 host.go:66] Checking if "kubernetes-upgrade-20210817024307-1554185" exists ...
	I0817 02:44:04.046775 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:04.067540 1660780 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:44:04.067635 1660780 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:04.067644 1660780 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:44:04.067698 1660780 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817024307-1554185
	I0817 02:44:04.134515 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:04.135773 1660780 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:44:04.135809 1660780 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:04.135958 1660780 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
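The kubectl pipeline above patches the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.58.1 for this profile). Assuming the kubeconfig written by this run is active, the result can be checked like this; the expected stanza is reconstructed from the sed expression above:

    # Print the patched Corefile; it should now contain a block like:
    #   hosts {
    #      192.168.58.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl --context kubernetes-upgrade-20210817024307-1554185 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'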
	I0817 02:44:04.162945 1660780 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:04.162964 1660780 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:44:04.163014 1660780 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817024307-1554185
	I0817 02:44:04.169842 1660780 api_server.go:70] duration metric: took 209.062582ms to wait for apiserver process to appear ...
	I0817 02:44:04.169860 1660780 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:44:04.169869 1660780 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:44:04.202845 1660780 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0817 02:44:04.203126 1660780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50433 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210817024307-1554185/id_rsa Username:docker}
	I0817 02:44:04.205563 1660780 api_server.go:139] control plane version: v1.14.0
	I0817 02:44:04.205585 1660780 api_server.go:129] duration metric: took 35.719651ms to wait for apiserver health ...
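minikube probes /healthz directly over HTTPS and then reads the control-plane version. The same checks from the shell, assuming the kubeconfig written by this run is the active one:

    # Health and version checks equivalent to the log lines above.
    kubectl --context kubernetes-upgrade-20210817024307-1554185 get --raw /healthz
    kubectl --context kubernetes-upgrade-20210817024307-1554185 version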
	I0817 02:44:04.205593 1660780 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:44:04.225295 1660780 system_pods.go:59] 0 kube-system pods found
	I0817 02:44:04.225327 1660780 retry.go:31] will retry after 305.063636ms: only 0 pod(s) have shown up
	I0817 02:44:04.236904 1660780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50433 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210817024307-1554185/id_rsa Username:docker}
	I0817 02:44:04.357326 1660780 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:04.437896 1660780 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:04.532218 1660780 system_pods.go:59] 0 kube-system pods found
	I0817 02:44:04.532272 1660780 retry.go:31] will retry after 338.212508ms: only 0 pod(s) have shown up
	I0817 02:44:04.546195 1660780 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0817 02:44:04.758200 1660780 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:44:04.758230 1660780 addons.go:344] enableAddons completed in 797.092673ms
	I0817 02:44:04.873249 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:04.873282 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:04.873317 1660780 retry.go:31] will retry after 378.459802ms: only 1 pod(s) have shown up
	I0817 02:44:05.254768 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:05.254794 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:05.254805 1660780 retry.go:31] will retry after 469.882201ms: only 1 pod(s) have shown up
	I0817 02:44:05.727524 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:05.727553 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:05.727565 1660780 retry.go:31] will retry after 667.365439ms: only 1 pod(s) have shown up
	I0817 02:44:06.397213 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:06.397242 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:06.397268 1660780 retry.go:31] will retry after 597.243124ms: only 1 pod(s) have shown up
	I0817 02:44:06.996568 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:06.996597 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:06.996622 1660780 retry.go:31] will retry after 789.889932ms: only 1 pod(s) have shown up
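The storage-provisioner pod stays Pending because the single node still carries a taint the pod does not tolerate (typically node.kubernetes.io/not-ready while the kubelet finishes coming up), so minikube keeps re-listing kube-system pods. To inspect the situation by hand, with the same context as above:

    # Show the pending pod and the node taints that are blocking it.
    kubectl --context kubernetes-upgrade-20210817024307-1554185 -n kube-system get pods
    kubectl --context kubernetes-upgrade-20210817024307-1554185 describe nodes | grep -i taints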
	I0817 02:44:10.303855 1662846 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:44:10.329831 1662846 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 02:44:10.329883 1662846 ssh_runner.go:149] Run: containerd --version
	I0817 02:44:10.350547 1662846 ssh_runner.go:149] Run: containerd --version
	I0817 02:44:10.372464 1662846 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 02:44:10.372542 1662846 cli_runner.go:115] Run: docker network inspect pause-20210817024148-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 02:44:10.403413 1662846 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 02:44:10.406786 1662846 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:44:10.406887 1662846 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:44:10.432961 1662846 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:44:10.432979 1662846 containerd.go:517] Images already preloaded, skipping extraction
	I0817 02:44:10.433017 1662846 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:44:10.456152 1662846 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:44:10.456171 1662846 cache_images.go:74] Images are preloaded, skipping loading
	I0817 02:44:10.456212 1662846 ssh_runner.go:149] Run: sudo crictl info
	I0817 02:44:10.478011 1662846 cni.go:93] Creating CNI manager for ""
	I0817 02:44:10.478033 1662846 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:44:10.478056 1662846 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 02:44:10.478081 1662846 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210817024148-1554185 NodeName:pause-20210817024148-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:
/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 02:44:10.478244 1662846 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20210817024148-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
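Note: the YAML above is produced by rendering the kubeadm options logged at kubeadm.go:153 through a Go text template. A minimal, hypothetical sketch of that kind of rendering follows; the struct and field names here are illustrative, not minikube's actual types.

    // Sketch: render a kubeadm InitConfiguration fragment from an options struct.
    package main

    import (
    	"os"
    	"text/template"
    )

    // initOpts is an illustrative subset of the options logged above.
    type initOpts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	NodeName         string
    	CRISocket        string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
    	opts := initOpts{
    		AdvertiseAddress: "192.168.49.2",
    		APIServerPort:    8443,
    		NodeName:         "pause-20210817024148-1554185",
    		CRISocket:        "/run/containerd/containerd.sock",
    	}
    	// Render the fragment to stdout; the full file is later copied to
    	// /var/tmp/minikube/kubeadm.yaml.new over SSH, as the log shows.
    	_ = template.Must(template.New("init").Parse(initTmpl)).Execute(os.Stdout, opts)
    }
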
	I0817 02:44:10.478333 1662846 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210817024148-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 02:44:10.478387 1662846 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 02:44:10.484940 1662846 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 02:44:10.484985 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 02:44:10.490924 1662846 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (573 bytes)
	I0817 02:44:10.502660 1662846 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 02:44:10.513948 1662846 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2078 bytes)
	I0817 02:44:10.524832 1662846 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 02:44:10.527527 1662846 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185 for IP: 192.168.49.2
	I0817 02:44:10.527570 1662846 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 02:44:10.527589 1662846 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 02:44:10.527638 1662846 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key
	I0817 02:44:10.527664 1662846 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.key.dd3b5fb2
	I0817 02:44:10.527684 1662846 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.key
	I0817 02:44:10.527782 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 02:44:10.527819 1662846 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 02:44:10.527834 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 02:44:10.527857 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 02:44:10.527884 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 02:44:10.527924 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 02:44:10.527974 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:44:10.529038 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 02:44:10.544784 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 02:44:10.559660 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 02:44:10.574785 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 02:44:10.590201 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 02:44:10.605037 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 02:44:10.623387 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 02:44:10.639037 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 02:44:10.654135 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 02:44:10.669090 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 02:44:10.684622 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 02:44:10.699670 1662846 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 02:44:10.710765 1662846 ssh_runner.go:149] Run: openssl version
	I0817 02:44:10.717019 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 02:44:10.724137 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.726900 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.726944 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.731588 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 02:44:10.737384 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 02:44:10.743565 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.746327 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.746375 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.750493 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 02:44:10.756191 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 02:44:10.762612 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.765540 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.765579 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.769923 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
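Note: the openssl/ln sequence above installs each CA under its OpenSSL subject hash, so that /etc/ssl/certs/<hash>.0 resolves to the PEM file. A small sketch of the same hash-and-symlink step is below (assuming openssl is on PATH; the paths are illustrative and creating the symlink normally requires root).

    // Sketch: compute the OpenSSL subject hash of a PEM and symlink it
    // into /etc/ssl/certs/<hash>.0, mirroring the log lines above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		fmt.Println("openssl failed:", err)
    		return
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	// Replicate `ln -fs` behaviour: remove any stale link, then create it.
    	_ = os.Remove(link)
    	if err := os.Symlink(pem, link); err != nil {
    		fmt.Println("symlink failed (expected without root):", err)
    		return
    	}
    	fmt.Printf("linked %s -> %s\n", link, pem)
    }
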
	I0817 02:44:10.775678 1662846 kubeadm.go:390] StartCluster: {Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:44:10.775762 1662846 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 02:44:10.775824 1662846 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:44:10.802431 1662846 cri.go:76] found id: "6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a"
	I0817 02:44:10.802447 1662846 cri.go:76] found id: "335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0"
	I0817 02:44:10.802453 1662846 cri.go:76] found id: "aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66"
	I0817 02:44:10.802457 1662846 cri.go:76] found id: "ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:10.802462 1662846 cri.go:76] found id: "f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367"
	I0817 02:44:10.802470 1662846 cri.go:76] found id: "fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583"
	I0817 02:44:10.802478 1662846 cri.go:76] found id: "63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08"
	I0817 02:44:10.802488 1662846 cri.go:76] found id: ""
	I0817 02:44:10.802522 1662846 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:44:10.836121 1662846 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","pid":1575,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0/rootfs","created":"2021-08-17T02:43:04.897293883Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","pid":919,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60/rootfs","created":"2021-08-17T02:42:39.314001317Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210817024148-1554185_ed0234c9cb81abc8dc5bbcdfaf787883"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","pid":1063,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08/rootfs","created":"2021-08-17T02:42:39.515581049Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id"
:"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a/rootfs","created":"2021-08-17T02:43:53.705600139Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","rootfs":"/run/containerd
/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb/rootfs","created":"2021-08-17T02:42:39.34752748Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210817024148-1554185_0718a393987d1be3d2ae606f942a3f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa/rootfs","created":"2021-08-17T02:43:04.691568232Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"771e9a30f4bd
ab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-h6fvl_e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9/rootfs","created":"2021-08-17T02:42:39.402749597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210817024148-1554185_a97ec150b358b2334cd33dc4c454d661"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aad2134f4047a0355037c106be1e03aab147a921
033b3042e86497dd8533ae66","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66/rootfs","created":"2021-08-17T02:43:04.899987912Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","pid":1861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec/rootfs","created":"2021-08-17T02:43:53.53831483Z",
"annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-bzchw_5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9/rootfs","created":"2021-08-17T02:42:39.343864386Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210817024148-1554185_b3d802d137e
d3edc177474465272f732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c/rootfs","created":"2021-08-17T02:43:04.691547802Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9lnwm_8b4c4c45-1613-49c8-9b03-e13120205af4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","pid":1140,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c466
5f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/rootfs","created":"2021-08-17T02:42:39.62877376Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","pid":1098,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367/rootfs","created":"2021-08-17T02:42:39.549869273Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607d
f03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","pid":1061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583/rootfs","created":"2021-08-17T02:42:39.510915707Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60"},"owner":"root"}]
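Note: the filtering that follows works over this `runc ... list -f json` output: sandbox entries and containers whose state is not "paused" are skipped. A hedged sketch of that filtering, reusing the field names shown in the JSON above (illustrative, not minikube's cri.go; the container-type annotation stands in for the "not in ps" check):

    // Sketch: parse runc's JSON listing and keep only paused containers.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type runcContainer struct {
    	ID          string            `json:"id"`
    	Status      string            `json:"status"`
    	Annotations map[string]string `json:"annotations"`
    }

    func main() {
    	raw := `[{"id":"abc","status":"running","annotations":{"io.kubernetes.cri.container-type":"container"}},
    	         {"id":"def","status":"paused","annotations":{"io.kubernetes.cri.container-type":"container"}},
    	         {"id":"ghi","status":"running","annotations":{"io.kubernetes.cri.container-type":"sandbox"}}]`

    	var list []runcContainer
    	if err := json.Unmarshal([]byte(raw), &list); err != nil {
    		panic(err)
    	}
    	for _, c := range list {
    		if c.Annotations["io.kubernetes.cri.container-type"] != "container" {
    			continue // sandboxes are skipped, like the "not in ps" lines below
    		}
    		if c.Status != "paused" {
    			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, "paused")
    			continue
    		}
    		fmt.Println("would collect:", c.ID)
    	}
    }
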
	I0817 02:44:10.836356 1662846 cri.go:113] list returned 14 containers
	I0817 02:44:10.836368 1662846 cri.go:116] container: {ID:335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 Status:running}
	I0817 02:44:10.836387 1662846 cri.go:122] skipping {335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 running}: state = "running", want "paused"
	I0817 02:44:10.836402 1662846 cri.go:116] container: {ID:3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 Status:running}
	I0817 02:44:10.836408 1662846 cri.go:118] skipping 3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 - not in ps
	I0817 02:44:10.836413 1662846 cri.go:116] container: {ID:63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 Status:running}
	I0817 02:44:10.836423 1662846 cri.go:122] skipping {63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 running}: state = "running", want "paused"
	I0817 02:44:10.836429 1662846 cri.go:116] container: {ID:6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a Status:running}
	I0817 02:44:10.836439 1662846 cri.go:122] skipping {6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a running}: state = "running", want "paused"
	I0817 02:44:10.836444 1662846 cri.go:116] container: {ID:73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb Status:running}
	I0817 02:44:10.836454 1662846 cri.go:118] skipping 73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb - not in ps
	I0817 02:44:10.836458 1662846 cri.go:116] container: {ID:771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa Status:running}
	I0817 02:44:10.836463 1662846 cri.go:118] skipping 771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa - not in ps
	I0817 02:44:10.836467 1662846 cri.go:116] container: {ID:7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 Status:running}
	I0817 02:44:10.836477 1662846 cri.go:118] skipping 7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 - not in ps
	I0817 02:44:10.836481 1662846 cri.go:116] container: {ID:aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 Status:running}
	I0817 02:44:10.836493 1662846 cri.go:122] skipping {aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 running}: state = "running", want "paused"
	I0817 02:44:10.836499 1662846 cri.go:116] container: {ID:b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec Status:running}
	I0817 02:44:10.836512 1662846 cri.go:118] skipping b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec - not in ps
	I0817 02:44:10.836518 1662846 cri.go:116] container: {ID:bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 Status:running}
	I0817 02:44:10.836524 1662846 cri.go:118] skipping bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 - not in ps
	I0817 02:44:10.836528 1662846 cri.go:116] container: {ID:ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c Status:running}
	I0817 02:44:10.836533 1662846 cri.go:118] skipping ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c - not in ps
	I0817 02:44:10.836537 1662846 cri.go:116] container: {ID:ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 Status:running}
	I0817 02:44:10.836542 1662846 cri.go:122] skipping {ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 running}: state = "running", want "paused"
	I0817 02:44:10.836547 1662846 cri.go:116] container: {ID:f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 Status:running}
	I0817 02:44:10.836553 1662846 cri.go:122] skipping {f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 running}: state = "running", want "paused"
	I0817 02:44:10.836562 1662846 cri.go:116] container: {ID:fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 Status:running}
	I0817 02:44:10.836567 1662846 cri.go:122] skipping {fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 running}: state = "running", want "paused"
	I0817 02:44:10.836606 1662846 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 02:44:10.842672 1662846 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 02:44:10.842686 1662846 kubeadm.go:600] restartCluster start
	I0817 02:44:10.842722 1662846 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 02:44:10.848569 1662846 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:44:10.849505 1662846 kubeconfig.go:93] found "pause-20210817024148-1554185" server: "https://192.168.49.2:8443"
	I0817 02:44:10.850203 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-2021081
7024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:10.851914 1662846 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 02:44:10.861281 1662846 api_server.go:164] Checking apiserver status ...
	I0817 02:44:10.861344 1662846 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:10.872606 1662846 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup
	I0817 02:44:10.878948 1662846 api_server.go:180] apiserver freezer: "6:freezer:/docker/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/kubepods/burstable/poda97ec150b358b2334cd33dc4c454d661/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:10.879031 1662846 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/kubepods/burstable/poda97ec150b358b2334cd33dc4c454d661/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/freezer.state
	I0817 02:44:10.887614 1662846 api_server.go:202] freezer state: "THAWED"
	I0817 02:44:10.887653 1662846 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:44:10.897405 1662846 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
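Note: the healthz probe at api_server.go:239 is an HTTPS GET against the advertised apiserver endpoint, issued after the freezer-cgroup check above confirmed the process is not frozen. A self-contained sketch of such a probe follows; certificate verification is skipped here only to keep the example standalone, whereas minikube verifies against the cluster CA.

    // Sketch: probe the apiserver /healthz endpoint over HTTPS.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// InsecureSkipVerify keeps the example self-contained;
    			// do not use it against a real cluster.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz check failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
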
	I0817 02:44:10.921793 1662846 system_pods.go:86] 7 kube-system pods found
	I0817 02:44:10.921826 1662846 system_pods.go:89] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:10.921833 1662846 system_pods.go:89] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:10.921837 1662846 system_pods.go:89] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:10.921846 1662846 system_pods.go:89] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:10.921851 1662846 system_pods.go:89] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:10.921860 1662846 system_pods.go:89] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:10.921864 1662846 system_pods.go:89] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:10.922656 1662846 api_server.go:139] control plane version: v1.21.3
	I0817 02:44:10.922674 1662846 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.49.2
	I0817 02:44:10.922683 1662846 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0817 02:44:10.922688 1662846 kubeadm.go:604] restartCluster took 79.997602ms
	I0817 02:44:10.922692 1662846 kubeadm.go:392] StartCluster complete in 147.020078ms
	I0817 02:44:10.922711 1662846 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:10.922795 1662846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:44:10.923814 1662846 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:10.924639 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-2021081
7024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:10.927764 1662846 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210817024148-1554185" rescaled to 1
	I0817 02:44:10.927819 1662846 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 02:44:10.929557 1662846 out.go:177] * Verifying Kubernetes components...
	I0817 02:44:10.929621 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:10.928056 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:44:10.928073 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:44:10.928083 1662846 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 02:44:10.929772 1662846 addons.go:59] Setting storage-provisioner=true in profile "pause-20210817024148-1554185"
	I0817 02:44:10.929796 1662846 addons.go:135] Setting addon storage-provisioner=true in "pause-20210817024148-1554185"
	W0817 02:44:10.929827 1662846 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:44:10.929865 1662846 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:10.930344 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:10.935094 1662846 addons.go:59] Setting default-storageclass=true in profile "pause-20210817024148-1554185"
	I0817 02:44:10.935122 1662846 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210817024148-1554185"
	I0817 02:44:10.935399 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:11.011181 1662846 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:44:11.011290 1662846 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:11.011301 1662846 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:44:11.011350 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:11.015016 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-2021081
7024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:11.019001 1662846 addons.go:135] Setting addon default-storageclass=true in "pause-20210817024148-1554185"
	W0817 02:44:11.019019 1662846 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:44:11.019042 1662846 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:11.019478 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:11.072649 1662846 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:11.072687 1662846 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:44:11.072739 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:11.092036 1662846 node_ready.go:35] waiting up to 6m0s for node "pause-20210817024148-1554185" to be "Ready" ...
	I0817 02:44:11.092329 1662846 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 02:44:11.095935 1662846 node_ready.go:49] node "pause-20210817024148-1554185" has status "Ready":"True"
	I0817 02:44:11.095950 1662846 node_ready.go:38] duration metric: took 3.885427ms waiting for node "pause-20210817024148-1554185" to be "Ready" ...
	I0817 02:44:11.095958 1662846 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:44:11.105426 1662846 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.115130 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:11.136809 1662846 pod_ready.go:92] pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.136824 1662846 pod_ready.go:81] duration metric: took 31.377737ms waiting for pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.136834 1662846 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.140355 1662846 pod_ready.go:92] pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.140372 1662846 pod_ready.go:81] duration metric: took 3.530681ms waiting for pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.140384 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.145229 1662846 pod_ready.go:92] pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.145269 1662846 pod_ready.go:81] duration metric: took 4.874316ms waiting for pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.145292 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.155084 1662846 pod_ready.go:92] pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.155097 1662846 pod_ready.go:81] duration metric: took 9.787982ms waiting for pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.155105 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6fvl" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.159276 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:11.210907 1662846 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:11.257270 1662846 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:11.502673 1662846 pod_ready.go:92] pod "kube-proxy-h6fvl" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.502728 1662846 pod_ready.go:81] duration metric: took 347.614714ms waiting for pod "kube-proxy-h6fvl" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.502752 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:07.789502 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:07.789527 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:07.789539 1660780 retry.go:31] will retry after 951.868007ms: only 1 pod(s) have shown up
	I0817 02:44:08.743829 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:08.743853 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:08.743868 1660780 retry.go:31] will retry after 1.341783893s: only 1 pod(s) have shown up
	I0817 02:44:10.088004 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:10.088035 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:10.088048 1660780 retry.go:31] will retry after 1.876813009s: only 1 pod(s) have shown up
	I0817 02:44:11.967374 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:11.967401 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:11.967413 1660780 retry.go:31] will retry after 2.6934314s: only 1 pod(s) have shown up
	I0817 02:44:11.600462 1662846 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:44:11.600486 1662846 addons.go:344] enableAddons completed in 672.404962ms
	I0817 02:44:11.900577 1662846 pod_ready.go:92] pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.900629 1662846 pod_ready.go:81] duration metric: took 397.857202ms waiting for pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.900649 1662846 pod_ready.go:38] duration metric: took 804.679453ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
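Note: the pod_ready waits above poll each system-critical pod for the Ready condition. A minimal client-go sketch of such a poll, assuming a kubeconfig path and pod name supplied by the caller (illustrative, not minikube's pod_ready.go):

    // Sketch: poll a pod until its PodReady condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumption: kubeconfig path and pod name are provided by the caller.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	for i := 0; i < 30; i++ {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-558bd4d5db-bzchw", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }
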
	I0817 02:44:11.900677 1662846 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:44:11.900739 1662846 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:11.914199 1662846 api_server.go:70] duration metric: took 986.33934ms to wait for apiserver process to appear ...
	I0817 02:44:11.914238 1662846 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:44:11.914267 1662846 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:44:11.922723 1662846 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 02:44:11.923486 1662846 api_server.go:139] control plane version: v1.21.3
	I0817 02:44:11.923532 1662846 api_server.go:129] duration metric: took 9.277267ms to wait for apiserver health ...
	I0817 02:44:11.923552 1662846 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:44:12.113654 1662846 system_pods.go:59] 8 kube-system pods found
	I0817 02:44:12.113686 1662846 system_pods.go:61] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:12.113692 1662846 system_pods.go:61] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:12.113696 1662846 system_pods.go:61] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:12.113701 1662846 system_pods.go:61] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:12.113735 1662846 system_pods.go:61] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:12.113747 1662846 system_pods.go:61] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:12.113754 1662846 system_pods.go:61] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:12.113767 1662846 system_pods.go:61] "storage-provisioner" [562918b9-84e2-4f7e-9a0a-70742893e39d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:44:12.113773 1662846 system_pods.go:74] duration metric: took 190.207086ms to wait for pod list to return data ...
	I0817 02:44:12.113797 1662846 default_sa.go:34] waiting for default service account to be created ...
	I0817 02:44:12.300796 1662846 default_sa.go:45] found service account: "default"
	I0817 02:44:12.300822 1662846 default_sa.go:55] duration metric: took 187.014117ms for default service account to be created ...
	I0817 02:44:12.300830 1662846 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 02:44:12.506751 1662846 system_pods.go:86] 8 kube-system pods found
	I0817 02:44:12.506786 1662846 system_pods.go:89] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:12.506793 1662846 system_pods.go:89] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:12.510790 1662846 system_pods.go:89] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:12.510805 1662846 system_pods.go:89] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:12.510832 1662846 system_pods.go:89] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:12.510838 1662846 system_pods.go:89] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:12.510844 1662846 system_pods.go:89] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:12.510849 1662846 system_pods.go:89] "storage-provisioner" [562918b9-84e2-4f7e-9a0a-70742893e39d] Running
	I0817 02:44:12.510855 1662846 system_pods.go:126] duration metric: took 210.020669ms to wait for k8s-apps to be running ...
	I0817 02:44:12.510862 1662846 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 02:44:12.510915 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:12.520616 1662846 system_svc.go:56] duration metric: took 9.75179ms WaitForService to wait for kubelet.
	I0817 02:44:12.520637 1662846 kubeadm.go:547] duration metric: took 1.592794882s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 02:44:12.520657 1662846 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:44:12.701568 1662846 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:44:12.701598 1662846 node_conditions.go:123] node cpu capacity is 2
	I0817 02:44:12.701610 1662846 node_conditions.go:105] duration metric: took 180.94709ms to run NodePressure ...
	I0817 02:44:12.701620 1662846 start.go:231] waiting for startup goroutines ...
	I0817 02:44:12.753251 1662846 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 02:44:12.756175 1662846 out.go:177] * Done! kubectl is now configured to use "pause-20210817024148-1554185" cluster and "default" namespace by default
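Note: the line at start.go:462 reports the minor-version skew between the kubectl client and the cluster. A tiny, illustrative sketch of how such a skew can be computed (not minikube's implementation):

    // Sketch: compare client and cluster minor versions.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component of a version like "1.21.3" or "v1.21.3".
    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return 0
    	}
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	client, cluster := "1.21.3", "1.21.3"
    	skew := minor(client) - minor(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }
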
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	cf9fe43a28990       ba04bb24b9575       3 seconds ago        Running             storage-provisioner       0                   62792bf694eb6
	6f0de758f96ce       1a1f05a2cd7c2       21 seconds ago       Running             coredns                   0                   b107ef4ef1079
	335440e08b6b6       f37b7c809e5dc       About a minute ago   Running             kindnet-cni               0                   ec238e8d3a6b2
	aad2134f4047a       4ea38350a1beb       About a minute ago   Running             kube-proxy                0                   771e9a30f4bda
	ec4892b38d019       44a6d50ef170d       About a minute ago   Running             kube-apiserver            0                   7a53464dc6cc7
	f45a4f177814d       cb310ff289d79       About a minute ago   Running             kube-controller-manager   0                   73b440ce137c2
	fb735a50aaaf9       05b738aa1bc63       About a minute ago   Running             etcd                      0                   3daddbac69e62
	63836f8fc4c5a       31a3b96cefc1e       About a minute ago   Running             kube-scheduler            0                   bb150a03bb9cc
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:41:51 UTC, end at Tue 2021-08-17 02:44:15 UTC. --
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204239884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204253184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204285201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204412887Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204478118Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0x40003d0f60 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPlug
inConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.4.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204553079Z" level=info msg="Connect containerd service"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204613452Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205685425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205900471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205941102Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 17 02:43:59 pause-20210817024148-1554185 systemd[1]: Started containerd container runtime.
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.207100615Z" level=info msg="containerd successfully booted in 0.049192s"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.211900478Z" level=info msg="Start subscribing containerd event"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.220448431Z" level=info msg="Start recovering state"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299199088Z" level=info msg="Start event monitor"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299328802Z" level=info msg="Start snapshots syncer"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299384194Z" level=info msg="Start cni network conf syncer"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299434393Z" level=info msg="Start streaming server"
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.892999886Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:562918b9-84e2-4f7e-9a0a-70742893e39d,Namespace:kube-system,Attempt:0,}"
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.920384792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d pid=2435
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.993666940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:562918b9-84e2-4f7e-9a0a-70742893e39d,Namespace:kube-system,Attempt:0,} returns sandbox id \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\""
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.996018212Z" level=info msg="CreateContainer within sandbox \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.024621543Z" level=info msg="CreateContainer within sandbox \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\""
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.025144015Z" level=info msg="StartContainer for \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\""
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.093587482Z" level=info msg="StartContainer for \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\" returns successfully"
	
	* 
	* ==> coredns [6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210817024148-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-20210817024148-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210817024148-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T02_42_50_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 02:42:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210817024148-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 02:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:43:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20210817024148-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                1148c453-a7b1-434d-b3fe-0e100988f0a3
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-bzchw                                100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (2%!)(MISSING)     71s
	  kube-system                 etcd-pause-20210817024148-1554185                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         86s
	  kube-system                 kindnet-9lnwm                                           100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (0%!)(MISSING)        50Mi (0%!)(MISSING)      71s
	  kube-system                 kube-apiserver-pause-20210817024148-1554185             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         88s
	  kube-system                 kube-controller-manager-pause-20210817024148-1554185    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         77s
	  kube-system                 kube-proxy-h6fvl                                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         71s
	  kube-system                 kube-scheduler-pause-20210817024148-1554185             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         77s
	  kube-system                 storage-provisioner                                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%!)(MISSING)  100m (5%!)(MISSING)
	  memory             220Mi (2%!)(MISSING)  220Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  97s (x5 over 97s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x5 over 97s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x4 over 97s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 77s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s                kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s                kubelet     Node pause-20210817024148-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s                kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 70s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                27s                kubelet     Node pause-20210817024148-1554185 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583] <==
	* 2021-08-17 02:42:39.573167 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 02:42:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 02:42:39.573411 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 02:42:40 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 02:42:40.459131 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 02:42:40.465042 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 02:42:40.465114 I | etcdserver: published {Name:pause-20210817024148-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 02:42:40.465227 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 02:42:40.465256 I | embed: ready to serve client requests
	2021-08-17 02:42:40.469660 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 02:42:40.469974 I | embed: ready to serve client requests
	2021-08-17 02:42:40.471130 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:42:49.193636 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:03.920345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:08.512078 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:18.511903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:28.515027 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:38.512484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:48.512959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:58.512373 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:44:08.511866 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:44:15 up 10:26,  0 users,  load average: 2.48, 1.72, 1.25
	Linux pause-20210817024148-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2] <==
	* I0817 02:42:46.856814       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 02:42:46.869670       1 controller.go:611] quota admission added evaluator for: namespaces
	I0817 02:42:46.901749       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 02:42:47.639727       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 02:42:47.639906       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 02:42:47.665038       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 02:42:47.669713       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 02:42:47.669739       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 02:42:48.274054       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 02:42:48.310293       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 02:42:48.398617       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 02:42:48.400201       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 02:42:48.403641       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 02:42:49.313095       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 02:42:49.844292       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 02:42:49.897658       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 02:42:58.279367       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 02:43:04.136440       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0817 02:43:04.199838       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 02:43:21.010651       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:43:21.010876       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:43:21.010973       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:43:51.301051       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:43:51.301091       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:43:51.301099       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367] <==
	* I0817 02:43:03.483116       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0817 02:43:03.483462       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0817 02:43:03.483847       1 event.go:291] "Event occurred" object="pause-20210817024148-1554185" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210817024148-1554185 event: Registered Node pause-20210817024148-1554185 in Controller"
	I0817 02:43:03.491024       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0817 02:43:03.525270       1 shared_informer.go:247] Caches are synced for HPA 
	I0817 02:43:03.531014       1 shared_informer.go:247] Caches are synced for attach detach 
	I0817 02:43:03.531095       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 02:43:03.531106       1 shared_informer.go:247] Caches are synced for endpoint 
	I0817 02:43:03.542828       1 shared_informer.go:247] Caches are synced for disruption 
	I0817 02:43:03.542888       1 disruption.go:371] Sending events to api server.
	I0817 02:43:03.543000       1 shared_informer.go:247] Caches are synced for job 
	I0817 02:43:03.543063       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0817 02:43:03.593684       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:43:03.657513       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:43:04.079136       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:43:04.079314       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 02:43:04.127990       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:43:04.138879       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0817 02:43:04.216419       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h6fvl"
	I0817 02:43:04.226354       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9lnwm"
	I0817 02:43:04.391394       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0817 02:43:04.400006       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-z7v6b"
	I0817 02:43:04.411923       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-bzchw"
	I0817 02:43:04.436039       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-z7v6b"
	I0817 02:43:48.489538       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66] <==
	* I0817 02:43:05.040138       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:43:05.040427       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:43:05.040569       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:43:05.066321       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:43:05.066475       1 server_others.go:212] Using iptables Proxier.
	I0817 02:43:05.066558       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:43:05.066632       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:43:05.067006       1 server.go:643] Version: v1.21.3
	I0817 02:43:05.067885       1 config.go:315] Starting service config controller
	I0817 02:43:05.068016       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:43:05.068105       1 config.go:224] Starting endpoint slice config controller
	I0817 02:43:05.068187       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:43:05.075717       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:43:05.079542       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:43:05.169159       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 02:43:05.169216       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08] <==
	* E0817 02:42:46.829872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:42:46.829928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0817 02:42:46.830223       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 02:42:46.830599       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:42:46.830655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:42:46.830711       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:42:46.830755       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:42:46.833525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.833686       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:42:46.833809       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.834107       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 02:42:46.840341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.843238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:42:47.692486       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.720037       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:42:47.720290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 02:42:47.763006       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.788725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.934043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:42:47.972589       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:42:47.976875       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:42:47.998841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:42:48.041214       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:42:48.197544       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 02:42:49.931904       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:41:51 UTC, end at Tue 2021-08-17 02:44:15 UTC. --
	Aug 17 02:43:04 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:04.379642    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe-xtables-lock\") pod \"kube-proxy-h6fvl\" (UID: \"e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe\") "
	Aug 17 02:43:04 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:04.379734    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdls7\" (UniqueName: \"kubernetes.io/projected/e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe-kube-api-access-rdls7\") pod \"kube-proxy-h6fvl\" (UID: \"e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe\") "
	Aug 17 02:43:04 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:04.379824    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe-kube-proxy\") pod \"kube-proxy-h6fvl\" (UID: \"e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe\") "
	Aug 17 02:43:08 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:08.432586    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:13 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:13.433632    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:18 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:18.434613    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:23 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:23.436047    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:28 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:28.437251    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:33 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:33.438786    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:53 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:53.010477    1188 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 02:43:53 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:53.042619    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpt2g\" (UniqueName: \"kubernetes.io/projected/5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc-kube-api-access-tpt2g\") pod \"coredns-558bd4d5db-bzchw\" (UID: \"5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc\") "
	Aug 17 02:43:53 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:53.042805    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc-config-volume\") pod \"coredns-558bd4d5db-bzchw\" (UID: \"5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc\") "
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: W0817 02:43:59.062156    1188 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: W0817 02:43:59.062420    1188 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: W0817 02:43:59.162663    1188 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:59.464667    1188 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:59.465014    1188 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:59.465038    1188 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 17 02:44:11 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:11.555946    1188 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 02:44:11 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:11.681239    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvv4f\" (UniqueName: \"kubernetes.io/projected/562918b9-84e2-4f7e-9a0a-70742893e39d-kube-api-access-vvv4f\") pod \"storage-provisioner\" (UID: \"562918b9-84e2-4f7e-9a0a-70742893e39d\") "
	Aug 17 02:44:11 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:11.681340    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/562918b9-84e2-4f7e-9a0a-70742893e39d-tmp\") pod \"storage-provisioner\" (UID: \"562918b9-84e2-4f7e-9a0a-70742893e39d\") "
	Aug 17 02:44:13 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:13.148324    1188 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 17 02:44:13 pause-20210817024148-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 02:44:13 pause-20210817024148-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 02:44:13 pause-20210817024148-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42] <==
	* I0817 02:44:12.092161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 02:44:12.117851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 02:44:12.117933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 02:44:12.144567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 02:44:12.144687       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7dcc4aca-a2da-4802-9687-f8a1d81928d3", APIVersion:"v1", ResourceVersion:"515", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10 became leader
	I0817 02:44:12.145088       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10!
	I0817 02:44:12.245861       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185: exit status 2 (331.76787ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210817024148-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210817024148-1554185 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210817024148-1554185 describe pod : exit status 1 (58.855573ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context pause-20210817024148-1554185 describe pod : exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210817024148-1554185
helpers_test.go:236: (dbg) docker inspect pause-20210817024148-1554185:

-- stdout --
	[
	    {
	        "Id": "b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b",
	        "Created": "2021-08-17T02:41:50.320902147Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1657229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:41:51.004888651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/hosts",
	        "LogPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b-json.log",
	        "Name": "/pause-20210817024148-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20210817024148-1554185:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210817024148-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210817024148-1554185",
	                "Source": "/var/lib/docker/volumes/pause-20210817024148-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210817024148-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210817024148-1554185",
	                "name.minikube.sigs.k8s.io": "pause-20210817024148-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8a8e6cd79da22a5765a51578b1ea6e8efa8e27c6c5dbb571e80d79023db3847",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50410"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50406"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50408"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50407"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d8a8e6cd79da",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210817024148-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b9b1ad2a3171",
	                        "pause-20210817024148-1554185"
	                    ],
	                    "NetworkID": "747733296a426a6f52daff293191c7fb9ea960ba5380b91809f97050286a1932",
	                    "EndpointID": "54b17c5460167eb93db2a6807c51835973c485175b87062600e587d432698b14",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185: exit status 2 (308.039789ms)

-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p pause-20210817024148-1554185 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p pause-20210817024148-1554185 logs -n 25: (1.235155723s)
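Note: the post-mortem collection below can be reproduced by hand against the same profile with the two commands the harness ran above, e.g.:

	out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185
	out/minikube-linux-arm64 -p pause-20210817024148-1554185 logs -n 25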
helpers_test.go:253: TestPause/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                             Args                              |                   Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:47 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | cp testdata/cp-test.txt                                       |                                             |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                      |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:48 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | ssh sudo cat                                                  |                                             |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                      |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185 cp testdata/cp-test.txt      | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:48 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | multinode-20210817022620-1554185-m02:/home/docker/cp-test.txt |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:48 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | ssh -n                                                        |                                             |         |         |                               |                               |
	|         | multinode-20210817022620-1554185-m02                          |                                             |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185 cp testdata/cp-test.txt      | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:29:49 UTC |
	|         | multinode-20210817022620-1554185-m03:/home/docker/cp-test.txt |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:29:49 UTC |
	|         | ssh -n                                                        |                                             |         |         |                               |                               |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:30:09 UTC |
	|         | node stop m03                                                 |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:30:10 UTC | Tue, 17 Aug 2021 02:30:40 UTC |
	|         | node start m03 --alsologtostderr                              |                                             |         |         |                               |                               |
	| stop    | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:30:41 UTC | Tue, 17 Aug 2021 02:31:41 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:31:41 UTC | Tue, 17 Aug 2021 02:34:02 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	|         | --wait=true -v=8                                              |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:34:02 UTC | Tue, 17 Aug 2021 02:34:26 UTC |
	|         | node delete m03                                               |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:34:27 UTC | Tue, 17 Aug 2021 02:35:07 UTC |
	|         | stop                                                          |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:35:07 UTC | Tue, 17 Aug 2021 02:36:46 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	|         | --wait=true -v=8                                              |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	|         | --driver=docker                                               |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185-m03        | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:36:47 UTC | Tue, 17 Aug 2021 02:37:57 UTC |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	|         | --driver=docker                                               |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| delete  | -p                                                            | multinode-20210817022620-1554185-m03        | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:37:57 UTC | Tue, 17 Aug 2021 02:38:00 UTC |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	| delete  | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:38:00 UTC | Tue, 17 Aug 2021 02:38:04 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	| start   | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:39:35 UTC | Tue, 17 Aug 2021 02:40:42 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --memory=2048 --driver=docker                                 |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| stop    | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:40:42 UTC | Tue, 17 Aug 2021 02:40:42 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --cancel-scheduled                                            |                                             |         |         |                               |                               |
	| stop    | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:40:55 UTC | Tue, 17 Aug 2021 02:41:20 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --schedule 5s                                                 |                                             |         |         |                               |                               |
	| delete  | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:20 UTC | Tue, 17 Aug 2021 02:41:25 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	| delete  | -p                                                            | insufficient-storage-20210817024125-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:42 UTC | Tue, 17 Aug 2021 02:41:48 UTC |
	|         | insufficient-storage-20210817024125-1554185                   |                                             |         |         |                               |                               |
	| delete  | -p                                                            | missing-upgrade-20210817024148-1554185      | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:02 UTC | Tue, 17 Aug 2021 02:43:07 UTC |
	|         | missing-upgrade-20210817024148-1554185                        |                                             |         |         |                               |                               |
	| start   | -p                                                            | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:48 UTC | Tue, 17 Aug 2021 02:43:55 UTC |
	|         | pause-20210817024148-1554185                                  |                                             |         |         |                               |                               |
	|         | --memory=2048                                                 |                                             |         |         |                               |                               |
	|         | --install-addons=false                                        |                                             |         |         |                               |                               |
	|         | --wait=all --driver=docker                                    |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| start   | -p                                                            | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:55 UTC | Tue, 17 Aug 2021 02:44:12 UTC |
	|         | pause-20210817024148-1554185                                  |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	|         | -v=1 --driver=docker                                          |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| -p      | pause-20210817024148-1554185                                  | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:15 UTC | Tue, 17 Aug 2021 02:44:15 UTC |
	|         | logs -n 25                                                    |                                             |         |         |                               |                               |
	|---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 02:43:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 02:43:55.935620 1662846 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:43:55.935723 1662846 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:43:55.935738 1662846 out.go:311] Setting ErrFile to fd 2...
	I0817 02:43:55.935766 1662846 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:43:55.935946 1662846 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:43:55.936251 1662846 out.go:305] Setting JSON to false
	I0817 02:43:55.937622 1662846 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37574,"bootTime":1629130662,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:43:55.937719 1662846 start.go:121] virtualization:  
	I0817 02:43:55.939817 1662846 out.go:177] * [pause-20210817024148-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:43:55.941669 1662846 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:43:55.940702 1662846 notify.go:169] Checking for updates...
	I0817 02:43:55.943679 1662846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:43:55.945437 1662846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:43:55.946802 1662846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:43:55.947210 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:43:55.947633 1662846 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:43:56.027823 1662846 docker.go:132] docker version: linux-20.10.8
	I0817 02:43:56.027923 1662846 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:43:56.176370 1662846 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-17 02:43:56.090848407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:43:56.176502 1662846 docker.go:244] overlay module found
	I0817 02:43:56.179749 1662846 out.go:177] * Using the docker driver based on existing profile
	I0817 02:43:56.179775 1662846 start.go:278] selected driver: docker
	I0817 02:43:56.179782 1662846 start.go:751] validating driver "docker" against &{Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:43:56.179866 1662846 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 02:43:56.179980 1662846 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:43:56.292126 1662846 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-17 02:43:56.216837922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:43:56.292468 1662846 cni.go:93] Creating CNI manager for ""
	I0817 02:43:56.292486 1662846 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:43:56.292501 1662846 start_flags.go:277] config:
	{Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:43:56.294520 1662846 out.go:177] * Starting control plane node pause-20210817024148-1554185 in cluster pause-20210817024148-1554185
	I0817 02:43:56.294554 1662846 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 02:43:56.296008 1662846 out.go:177] * Pulling base image ...
	I0817 02:43:56.296031 1662846 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:43:56.296059 1662846 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 02:43:56.296077 1662846 cache.go:56] Caching tarball of preloaded images
	I0817 02:43:56.296206 1662846 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 02:43:56.296231 1662846 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 02:43:56.296337 1662846 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/config.json ...
	I0817 02:43:56.296506 1662846 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 02:43:56.358839 1662846 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 02:43:56.358863 1662846 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 02:43:56.358876 1662846 cache.go:205] Successfully downloaded all kic artifacts
	I0817 02:43:56.358910 1662846 start.go:313] acquiring machines lock for pause-20210817024148-1554185: {Name:mk43ad0c6625870b459afd5900940b78473b954e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 02:43:56.358994 1662846 start.go:317] acquired machines lock for "pause-20210817024148-1554185" in 57.583µs
	I0817 02:43:56.359016 1662846 start.go:93] Skipping create...Using existing machine configuration
	I0817 02:43:56.359025 1662846 fix.go:55] fixHost starting: 
	I0817 02:43:56.359303 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:43:56.396845 1662846 fix.go:108] recreateIfNeeded on pause-20210817024148-1554185: state=Running err=<nil>
	W0817 02:43:56.396879 1662846 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 02:43:56.399120 1662846 out.go:177] * Updating the running docker "pause-20210817024148-1554185" container ...
	I0817 02:43:56.399143 1662846 machine.go:88] provisioning docker machine ...
	I0817 02:43:56.399156 1662846 ubuntu.go:169] provisioning hostname "pause-20210817024148-1554185"
	I0817 02:43:56.399223 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:56.460270 1662846 main.go:130] libmachine: Using SSH client type: native
	I0817 02:43:56.460437 1662846 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50410 <nil> <nil>}
	I0817 02:43:56.460450 1662846 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210817024148-1554185 && echo "pause-20210817024148-1554185" | sudo tee /etc/hostname
	I0817 02:43:56.592739 1662846 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210817024148-1554185
	
	I0817 02:43:56.592882 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:56.636319 1662846 main.go:130] libmachine: Using SSH client type: native
	I0817 02:43:56.636499 1662846 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50410 <nil> <nil>}
	I0817 02:43:56.636520 1662846 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210817024148-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210817024148-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210817024148-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 02:43:56.775961 1662846 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 02:43:56.775984 1662846 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 02:43:56.776018 1662846 ubuntu.go:177] setting up certificates
	I0817 02:43:56.776029 1662846 provision.go:83] configureAuth start
	I0817 02:43:56.776079 1662846 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817024148-1554185
	I0817 02:43:56.827590 1662846 provision.go:138] copyHostCerts
	I0817 02:43:56.827646 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 02:43:56.827654 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 02:43:56.827713 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 02:43:56.827792 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 02:43:56.827799 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 02:43:56.827820 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 02:43:56.827872 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 02:43:56.827880 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 02:43:56.827900 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 02:43:56.827946 1662846 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.pause-20210817024148-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210817024148-1554185]
	I0817 02:43:57.691741 1662846 provision.go:172] copyRemoteCerts
	I0817 02:43:57.691838 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 02:43:57.691973 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:57.733998 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:57.822192 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 02:43:57.856857 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 02:43:57.883796 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 02:43:57.906578 1662846 provision.go:86] duration metric: configureAuth took 1.130540743s
	I0817 02:43:57.906595 1662846 ubuntu.go:193] setting minikube options for container-runtime
	I0817 02:43:57.906755 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:43:57.906762 1662846 machine.go:91] provisioned docker machine in 1.507614043s
	I0817 02:43:57.906767 1662846 start.go:267] post-start starting for "pause-20210817024148-1554185" (driver="docker")
	I0817 02:43:57.906773 1662846 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 02:43:57.906827 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 02:43:57.906865 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:57.948809 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.037379 1662846 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 02:43:58.040800 1662846 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 02:43:58.040823 1662846 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 02:43:58.040834 1662846 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 02:43:58.040841 1662846 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 02:43:58.040851 1662846 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 02:43:58.040896 1662846 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 02:43:58.040978 1662846 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 02:43:58.041076 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 02:43:58.047645 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:43:58.063186 1662846 start.go:270] post-start completed in 156.406695ms
	I0817 02:43:58.063236 1662846 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:43:58.063283 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.095312 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.179965 1662846 fix.go:57] fixHost completed within 1.82093535s
	I0817 02:43:58.179990 1662846 start.go:80] releasing machines lock for "pause-20210817024148-1554185", held for 1.820983908s
	I0817 02:43:58.180071 1662846 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817024148-1554185
	I0817 02:43:58.213692 1662846 ssh_runner.go:149] Run: systemctl --version
	I0817 02:43:58.213738 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.213787 1662846 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 02:43:58.213879 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.294808 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.305791 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.579632 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 02:43:58.595650 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 02:43:58.606604 1662846 docker.go:153] disabling docker service ...
	I0817 02:43:58.606667 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 02:43:58.617075 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 02:43:58.626385 1662846 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 02:43:58.766845 1662846 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 02:43:58.893614 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 02:43:58.903792 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 02:43:58.915967 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0817 02:43:58.928706 1662846 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 02:43:58.935023 1662846 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 02:43:58.941385 1662846 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 02:43:59.052351 1662846 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 02:43:59.207573 1662846 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 02:43:59.207636 1662846 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 02:43:59.211818 1662846 start.go:413] Will wait 60s for crictl version
	I0817 02:43:59.211935 1662846 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:43:59.253079 1662846 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T02:43:59Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 02:44:02.120599 1660780 out.go:204]   - Configuring RBAC rules ...
	I0817 02:44:02.550006 1660780 cni.go:93] Creating CNI manager for ""
	I0817 02:44:02.550033 1660780 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:44:02.551994 1660780 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:44:02.552050 1660780 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:44:02.555689 1660780 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0817 02:44:02.555706 1660780 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:44:02.567554 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:44:03.257879 1660780 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:44:03.258050 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=kubernetes-upgrade-20210817024307-1554185 minikube.k8s.io/updated_at=2021_08_17T02_44_03_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:44:03.258163 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:44:03.421126 1660780 kubeadm.go:985] duration metric: took 163.181168ms to wait for elevateKubeSystemPrivileges.
	I0817 02:44:03.421159 1660780 ops.go:34] apiserver oom_adj: 16
	I0817 02:44:03.421165 1660780 ops.go:39] adjusting apiserver oom_adj to -10
	I0817 02:44:03.421175 1660780 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:44:03.435297 1660780 kubeadm.go:392] StartCluster complete in 20.942151429s
	I0817 02:44:03.435324 1660780 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:03.435396 1660780 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:44:03.436729 1660780 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:03.437591 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:03.960675 1660780 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20210817024307-1554185" rescaled to 1
	I0817 02:44:03.960735 1660780 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0817 02:44:03.962792 1660780 out.go:177] * Verifying Kubernetes components...
	I0817 02:44:03.962884 1660780 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:03.960776 1660780 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:44:03.961119 1660780 config.go:177] Loaded profile config "kubernetes-upgrade-20210817024307-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0817 02:44:03.961133 1660780 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 02:44:03.963065 1660780 addons.go:59] Setting storage-provisioner=true in profile "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963086 1660780 addons.go:135] Setting addon storage-provisioner=true in "kubernetes-upgrade-20210817024307-1554185"
	W0817 02:44:03.963092 1660780 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:44:03.963117 1660780 host.go:66] Checking if "kubernetes-upgrade-20210817024307-1554185" exists ...
	I0817 02:44:03.963609 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:03.963742 1660780 addons.go:59] Setting default-storageclass=true in profile "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963759 1660780 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963983 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:04.042068 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:04.045974 1660780 addons.go:135] Setting addon default-storageclass=true in "kubernetes-upgrade-20210817024307-1554185"
	W0817 02:44:04.045994 1660780 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:44:04.046020 1660780 host.go:66] Checking if "kubernetes-upgrade-20210817024307-1554185" exists ...
	I0817 02:44:04.046775 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:04.067540 1660780 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:44:04.067635 1660780 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:04.067644 1660780 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:44:04.067698 1660780 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817024307-1554185
	I0817 02:44:04.134515 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:04.135773 1660780 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:44:04.135809 1660780 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:04.135958 1660780 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 02:44:04.162945 1660780 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:04.162964 1660780 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:44:04.163014 1660780 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817024307-1554185
	I0817 02:44:04.169842 1660780 api_server.go:70] duration metric: took 209.062582ms to wait for apiserver process to appear ...
	I0817 02:44:04.169860 1660780 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:44:04.169869 1660780 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:44:04.202845 1660780 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0817 02:44:04.203126 1660780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50433 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210817024307-1554185/id_rsa Username:docker}
	I0817 02:44:04.205563 1660780 api_server.go:139] control plane version: v1.14.0
	I0817 02:44:04.205585 1660780 api_server.go:129] duration metric: took 35.719651ms to wait for apiserver health ...
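The healthz wait above boils down to polling the apiserver's /healthz endpoint over HTTPS until it answers 200. A minimal Go sketch of such a probe follows; the URL, timeout, and the InsecureSkipVerify shortcut are illustrative assumptions, not what minikube's api_server.go actually does (it builds a client from the kubeconfig-derived rest.Config logged earlier).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz polls an apiserver /healthz endpoint until it returns 200 or the
// deadline expires. TLS verification is skipped purely for illustration; the
// real check uses the cluster CA from the client config shown in the log.
func probeHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, deadline)
}

func main() {
	// Hypothetical endpoint; substitute the node's advertise address and port.
	if err := probeHealthz("https://192.168.58.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("ok")
	}
}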
	I0817 02:44:04.205593 1660780 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:44:04.225295 1660780 system_pods.go:59] 0 kube-system pods found
	I0817 02:44:04.225327 1660780 retry.go:31] will retry after 305.063636ms: only 0 pod(s) have shown up
	I0817 02:44:04.236904 1660780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50433 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210817024307-1554185/id_rsa Username:docker}
	I0817 02:44:04.357326 1660780 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:04.437896 1660780 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:04.532218 1660780 system_pods.go:59] 0 kube-system pods found
	I0817 02:44:04.532272 1660780 retry.go:31] will retry after 338.212508ms: only 0 pod(s) have shown up
	I0817 02:44:04.546195 1660780 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0817 02:44:04.758200 1660780 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:44:04.758230 1660780 addons.go:344] enableAddons completed in 797.092673ms
	I0817 02:44:04.873249 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:04.873282 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:04.873317 1660780 retry.go:31] will retry after 378.459802ms: only 1 pod(s) have shown up
	I0817 02:44:05.254768 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:05.254794 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:05.254805 1660780 retry.go:31] will retry after 469.882201ms: only 1 pod(s) have shown up
	I0817 02:44:05.727524 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:05.727553 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:05.727565 1660780 retry.go:31] will retry after 667.365439ms: only 1 pod(s) have shown up
	I0817 02:44:06.397213 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:06.397242 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:06.397268 1660780 retry.go:31] will retry after 597.243124ms: only 1 pod(s) have shown up
	I0817 02:44:06.996568 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:06.996597 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:06.996622 1660780 retry.go:31] will retry after 789.889932ms: only 1 pod(s) have shown up
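The retry.go lines above show the pattern used while waiting for kube-system pods: call, log the failure, sleep a growing and slightly jittered delay, try again. A rough Go sketch of that loop is below; the attempt count, base delay, and the fake pod-listing closure are placeholders, not minikube's actual parameters.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn until it succeeds or attempts run out, sleeping a
// jittered, growing delay between tries, mirroring the "will retry after ..."
// lines in the log. The exact backoff curve here is illustrative only.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	podsSeen := 0
	err := retry(5, 300*time.Millisecond, func() error {
		podsSeen++ // stand-in for listing kube-system pods
		if podsSeen < 3 {
			return errors.New("only 1 pod(s) have shown up")
		}
		return nil
	})
	fmt.Println("result:", err)
}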
	I0817 02:44:10.303855 1662846 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:44:10.329831 1662846 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 02:44:10.329883 1662846 ssh_runner.go:149] Run: containerd --version
	I0817 02:44:10.350547 1662846 ssh_runner.go:149] Run: containerd --version
	I0817 02:44:10.372464 1662846 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 02:44:10.372542 1662846 cli_runner.go:115] Run: docker network inspect pause-20210817024148-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 02:44:10.403413 1662846 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 02:44:10.406786 1662846 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:44:10.406887 1662846 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:44:10.432961 1662846 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:44:10.432979 1662846 containerd.go:517] Images already preloaded, skipping extraction
	I0817 02:44:10.433017 1662846 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:44:10.456152 1662846 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:44:10.456171 1662846 cache_images.go:74] Images are preloaded, skipping loading
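The preload check above runs `sudo crictl images --output json` and compares what the runtime already has against the images the cluster will need. A sketch of that comparison, assuming crictl's JSON shape (an "images" array with "repoTags") and an illustrative one-image want list rather than minikube's real manifest:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches the parts of `crictl images --output json` used below.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloaded reports whether every wanted image tag is already known to the CRI
// runtime, i.e. the condition behind the "all images are preloaded" log lines.
func preloaded(wanted []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range wanted {
		if !have[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloaded([]string{"k8s.gcr.io/pause:3.4.1"}) // illustrative image only
	fmt.Println(ok, err)
}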
	I0817 02:44:10.456212 1662846 ssh_runner.go:149] Run: sudo crictl info
	I0817 02:44:10.478011 1662846 cni.go:93] Creating CNI manager for ""
	I0817 02:44:10.478033 1662846 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:44:10.478056 1662846 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 02:44:10.478081 1662846 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210817024148-1554185 NodeName:pause-20210817024148-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:
/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 02:44:10.478244 1662846 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20210817024148-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 02:44:10.478333 1662846 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210817024148-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 02:44:10.478387 1662846 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 02:44:10.484940 1662846 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 02:44:10.484985 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 02:44:10.490924 1662846 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (573 bytes)
	I0817 02:44:10.502660 1662846 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 02:44:10.513948 1662846 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2078 bytes)
	I0817 02:44:10.524832 1662846 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 02:44:10.527527 1662846 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185 for IP: 192.168.49.2
	I0817 02:44:10.527570 1662846 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 02:44:10.527589 1662846 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 02:44:10.527638 1662846 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key
	I0817 02:44:10.527664 1662846 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.key.dd3b5fb2
	I0817 02:44:10.527684 1662846 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.key
	I0817 02:44:10.527782 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 02:44:10.527819 1662846 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 02:44:10.527834 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 02:44:10.527857 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 02:44:10.527884 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 02:44:10.527924 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 02:44:10.527974 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:44:10.529038 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 02:44:10.544784 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 02:44:10.559660 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 02:44:10.574785 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 02:44:10.590201 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 02:44:10.605037 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 02:44:10.623387 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 02:44:10.639037 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 02:44:10.654135 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 02:44:10.669090 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 02:44:10.684622 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 02:44:10.699670 1662846 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 02:44:10.710765 1662846 ssh_runner.go:149] Run: openssl version
	I0817 02:44:10.717019 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 02:44:10.724137 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.726900 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.726944 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.731588 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 02:44:10.737384 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 02:44:10.743565 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.746327 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.746375 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.750493 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 02:44:10.756191 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 02:44:10.762612 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.765540 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.765579 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.769923 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
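The certificate setup above follows the standard OpenSSL trust-directory convention: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, ask `openssl x509 -hash -noout` for the subject hash, and create a `<hash>.0` symlink so clients can look it up. A hedged Go sketch of those steps (the paths are placeholders; minikube runs the equivalent shell commands over SSH, as logged):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links a certificate into an OpenSSL-style trust directory and
// adds the <subject-hash>.0 symlink used for lookup.
func installCACert(certPath, trustDir string) error {
	// Link the certificate under its own name, e.g. /etc/ssl/certs/1554185.pem.
	dst := filepath.Join(trustDir, filepath.Base(certPath))
	if err := os.Symlink(certPath, dst); err != nil && !os.IsExist(err) {
		return err
	}
	// Ask openssl for the subject hash, e.g. "51391683".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	// Create the hash-named link, e.g. /etc/ssl/certs/51391683.0.
	link := filepath.Join(trustDir, hash+".0")
	if err := os.Symlink(dst, link); err != nil && !os.IsExist(err) {
		return err
	}
	fmt.Println("installed", certPath, "as", link)
	return nil
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/1554185.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}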
	I0817 02:44:10.775678 1662846 kubeadm.go:390] StartCluster: {Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:44:10.775762 1662846 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 02:44:10.775824 1662846 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:44:10.802431 1662846 cri.go:76] found id: "6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a"
	I0817 02:44:10.802447 1662846 cri.go:76] found id: "335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0"
	I0817 02:44:10.802453 1662846 cri.go:76] found id: "aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66"
	I0817 02:44:10.802457 1662846 cri.go:76] found id: "ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:10.802462 1662846 cri.go:76] found id: "f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367"
	I0817 02:44:10.802470 1662846 cri.go:76] found id: "fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583"
	I0817 02:44:10.802478 1662846 cri.go:76] found id: "63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08"
	I0817 02:44:10.802488 1662846 cri.go:76] found id: ""
	I0817 02:44:10.802522 1662846 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:44:10.836121 1662846 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","pid":1575,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0/rootfs","created":"2021-08-17T02:43:04.897293883Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","pid":919,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60/rootfs","created":"2021-08-17T02:42:39.314001317Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210817024148-1554185_ed0234c9cb81abc8dc5bbcdfaf787883"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","pid":1063,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08/rootfs","created":"2021-08-17T02:42:39.515581049Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id"
:"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a/rootfs","created":"2021-08-17T02:43:53.705600139Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","rootfs":"/run/containerd
/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb/rootfs","created":"2021-08-17T02:42:39.34752748Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210817024148-1554185_0718a393987d1be3d2ae606f942a3f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa/rootfs","created":"2021-08-17T02:43:04.691568232Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"771e9a30f4bd
ab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-h6fvl_e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9/rootfs","created":"2021-08-17T02:42:39.402749597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210817024148-1554185_a97ec150b358b2334cd33dc4c454d661"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aad2134f4047a0355037c106be1e03aab147a921
033b3042e86497dd8533ae66","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66/rootfs","created":"2021-08-17T02:43:04.899987912Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","pid":1861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec/rootfs","created":"2021-08-17T02:43:53.53831483Z",
"annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-bzchw_5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9/rootfs","created":"2021-08-17T02:42:39.343864386Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210817024148-1554185_b3d802d137e
d3edc177474465272f732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c/rootfs","created":"2021-08-17T02:43:04.691547802Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9lnwm_8b4c4c45-1613-49c8-9b03-e13120205af4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","pid":1140,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c466
5f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/rootfs","created":"2021-08-17T02:42:39.62877376Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","pid":1098,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367/rootfs","created":"2021-08-17T02:42:39.549869273Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607d
f03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","pid":1061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583/rootfs","created":"2021-08-17T02:42:39.510915707Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60"},"owner":"root"}]
	I0817 02:44:10.836356 1662846 cri.go:113] list returned 14 containers
	I0817 02:44:10.836368 1662846 cri.go:116] container: {ID:335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 Status:running}
	I0817 02:44:10.836387 1662846 cri.go:122] skipping {335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 running}: state = "running", want "paused"
	I0817 02:44:10.836402 1662846 cri.go:116] container: {ID:3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 Status:running}
	I0817 02:44:10.836408 1662846 cri.go:118] skipping 3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 - not in ps
	I0817 02:44:10.836413 1662846 cri.go:116] container: {ID:63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 Status:running}
	I0817 02:44:10.836423 1662846 cri.go:122] skipping {63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 running}: state = "running", want "paused"
	I0817 02:44:10.836429 1662846 cri.go:116] container: {ID:6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a Status:running}
	I0817 02:44:10.836439 1662846 cri.go:122] skipping {6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a running}: state = "running", want "paused"
	I0817 02:44:10.836444 1662846 cri.go:116] container: {ID:73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb Status:running}
	I0817 02:44:10.836454 1662846 cri.go:118] skipping 73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb - not in ps
	I0817 02:44:10.836458 1662846 cri.go:116] container: {ID:771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa Status:running}
	I0817 02:44:10.836463 1662846 cri.go:118] skipping 771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa - not in ps
	I0817 02:44:10.836467 1662846 cri.go:116] container: {ID:7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 Status:running}
	I0817 02:44:10.836477 1662846 cri.go:118] skipping 7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 - not in ps
	I0817 02:44:10.836481 1662846 cri.go:116] container: {ID:aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 Status:running}
	I0817 02:44:10.836493 1662846 cri.go:122] skipping {aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 running}: state = "running", want "paused"
	I0817 02:44:10.836499 1662846 cri.go:116] container: {ID:b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec Status:running}
	I0817 02:44:10.836512 1662846 cri.go:118] skipping b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec - not in ps
	I0817 02:44:10.836518 1662846 cri.go:116] container: {ID:bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 Status:running}
	I0817 02:44:10.836524 1662846 cri.go:118] skipping bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 - not in ps
	I0817 02:44:10.836528 1662846 cri.go:116] container: {ID:ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c Status:running}
	I0817 02:44:10.836533 1662846 cri.go:118] skipping ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c - not in ps
	I0817 02:44:10.836537 1662846 cri.go:116] container: {ID:ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 Status:running}
	I0817 02:44:10.836542 1662846 cri.go:122] skipping {ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 running}: state = "running", want "paused"
	I0817 02:44:10.836547 1662846 cri.go:116] container: {ID:f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 Status:running}
	I0817 02:44:10.836553 1662846 cri.go:122] skipping {f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 running}: state = "running", want "paused"
	I0817 02:44:10.836562 1662846 cri.go:116] container: {ID:fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 Status:running}
	I0817 02:44:10.836567 1662846 cri.go:122] skipping {fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 running}: state = "running", want "paused"
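The cri.go lines above list containers with `runc ... list -f json` and then skip everything that is a pod sandbox or not in the wanted state. The sketch below decodes a trimmed version of that JSON and applies a similar filter; note that the real code cross-references the IDs returned by `crictl ps` to drop sandboxes, whereas this sketch approximates that with the `io.kubernetes.cri.container-type` annotation.

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer holds the fields of a `runc list -f json` entry used below.
type runcContainer struct {
	ID          string            `json:"id"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

// filterByState keeps non-sandbox containers whose runc status matches the
// wanted state, mirroring the "skipping ... want \"paused\"" decisions above.
func filterByState(list []runcContainer, want string) []string {
	var ids []string
	for _, c := range list {
		if c.Annotations["io.kubernetes.cri.container-type"] == "sandbox" {
			continue // pod sandboxes are skipped, like the "not in ps" lines
		}
		if c.Status != want {
			continue // e.g. state = "running", want "paused"
		}
		ids = append(ids, c.ID)
	}
	return ids
}

func main() {
	// Truncated stand-in for the real runc output shown in the log.
	raw := `[{"id":"abc","status":"running","annotations":{"io.kubernetes.cri.container-type":"container"}},
	         {"id":"def","status":"paused","annotations":{"io.kubernetes.cri.container-type":"container"}}]`
	var list []runcContainer
	if err := json.Unmarshal([]byte(raw), &list); err != nil {
		panic(err)
	}
	fmt.Println(filterByState(list, "paused")) // -> [def]
}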
	I0817 02:44:10.836606 1662846 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 02:44:10.842672 1662846 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 02:44:10.842686 1662846 kubeadm.go:600] restartCluster start
	I0817 02:44:10.842722 1662846 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 02:44:10.848569 1662846 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:44:10.849505 1662846 kubeconfig.go:93] found "pause-20210817024148-1554185" server: "https://192.168.49.2:8443"
	I0817 02:44:10.850203 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-2021081
7024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:10.851914 1662846 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 02:44:10.861281 1662846 api_server.go:164] Checking apiserver status ...
	I0817 02:44:10.861344 1662846 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:10.872606 1662846 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup
	I0817 02:44:10.878948 1662846 api_server.go:180] apiserver freezer: "6:freezer:/docker/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/kubepods/burstable/poda97ec150b358b2334cd33dc4c454d661/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:10.879031 1662846 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/kubepods/burstable/poda97ec150b358b2334cd33dc4c454d661/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/freezer.state
	I0817 02:44:10.887614 1662846 api_server.go:202] freezer state: "THAWED"
	I0817 02:44:10.887653 1662846 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:44:10.897405 1662846 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
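Before trusting the running apiserver, the log above resolves its freezer cgroup from /proc/<pid>/cgroup and reads freezer.state to confirm it is THAWED rather than paused. A cgroup-v1 sketch of that lookup, with the pid taken from the log purely as an example:

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState finds a process's freezer cgroup via /proc/<pid>/cgroup and reads
// its freezer.state (THAWED, FREEZING or FROZEN). Cgroup v1 layout is assumed,
// matching the paths in the log; v2 hosts expose this differently.
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		// Lines look like "6:freezer:/docker/<id>/kubepods/...".
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup found for pid %d", pid)
}

func main() {
	state, err := freezerState(1140) // pid from the log above, illustrative only
	fmt.Println(state, err)
}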
	I0817 02:44:10.921793 1662846 system_pods.go:86] 7 kube-system pods found
	I0817 02:44:10.921826 1662846 system_pods.go:89] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:10.921833 1662846 system_pods.go:89] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:10.921837 1662846 system_pods.go:89] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:10.921846 1662846 system_pods.go:89] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:10.921851 1662846 system_pods.go:89] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:10.921860 1662846 system_pods.go:89] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:10.921864 1662846 system_pods.go:89] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:10.922656 1662846 api_server.go:139] control plane version: v1.21.3
	I0817 02:44:10.922674 1662846 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.49.2
	I0817 02:44:10.922683 1662846 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0817 02:44:10.922688 1662846 kubeadm.go:604] restartCluster took 79.997602ms
	I0817 02:44:10.922692 1662846 kubeadm.go:392] StartCluster complete in 147.020078ms
	I0817 02:44:10.922711 1662846 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:10.922795 1662846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:44:10.923814 1662846 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:10.924639 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-2021081
7024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:10.927764 1662846 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210817024148-1554185" rescaled to 1
	I0817 02:44:10.927819 1662846 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 02:44:10.929557 1662846 out.go:177] * Verifying Kubernetes components...
	I0817 02:44:10.929621 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:10.928056 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:44:10.928073 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:44:10.928083 1662846 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 02:44:10.929772 1662846 addons.go:59] Setting storage-provisioner=true in profile "pause-20210817024148-1554185"
	I0817 02:44:10.929796 1662846 addons.go:135] Setting addon storage-provisioner=true in "pause-20210817024148-1554185"
	W0817 02:44:10.929827 1662846 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:44:10.929865 1662846 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:10.930344 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:10.935094 1662846 addons.go:59] Setting default-storageclass=true in profile "pause-20210817024148-1554185"
	I0817 02:44:10.935122 1662846 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210817024148-1554185"
	I0817 02:44:10.935399 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:11.011181 1662846 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:44:11.011290 1662846 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:11.011301 1662846 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:44:11.011350 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:11.015016 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-2021081
7024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:11.019001 1662846 addons.go:135] Setting addon default-storageclass=true in "pause-20210817024148-1554185"
	W0817 02:44:11.019019 1662846 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:44:11.019042 1662846 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:11.019478 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:11.072649 1662846 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:11.072687 1662846 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:44:11.072739 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:11.092036 1662846 node_ready.go:35] waiting up to 6m0s for node "pause-20210817024148-1554185" to be "Ready" ...
	I0817 02:44:11.092329 1662846 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 02:44:11.095935 1662846 node_ready.go:49] node "pause-20210817024148-1554185" has status "Ready":"True"
	I0817 02:44:11.095950 1662846 node_ready.go:38] duration metric: took 3.885427ms waiting for node "pause-20210817024148-1554185" to be "Ready" ...
	I0817 02:44:11.095958 1662846 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:44:11.105426 1662846 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.115130 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:11.136809 1662846 pod_ready.go:92] pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.136824 1662846 pod_ready.go:81] duration metric: took 31.377737ms waiting for pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.136834 1662846 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.140355 1662846 pod_ready.go:92] pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.140372 1662846 pod_ready.go:81] duration metric: took 3.530681ms waiting for pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.140384 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.145229 1662846 pod_ready.go:92] pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.145269 1662846 pod_ready.go:81] duration metric: took 4.874316ms waiting for pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.145292 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.155084 1662846 pod_ready.go:92] pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.155097 1662846 pod_ready.go:81] duration metric: took 9.787982ms waiting for pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.155105 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6fvl" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.159276 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:11.210907 1662846 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:11.257270 1662846 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:11.502673 1662846 pod_ready.go:92] pod "kube-proxy-h6fvl" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.502728 1662846 pod_ready.go:81] duration metric: took 347.614714ms waiting for pod "kube-proxy-h6fvl" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.502752 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:07.789502 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:07.789527 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:07.789539 1660780 retry.go:31] will retry after 951.868007ms: only 1 pod(s) have shown up
	I0817 02:44:08.743829 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:08.743853 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:08.743868 1660780 retry.go:31] will retry after 1.341783893s: only 1 pod(s) have shown up
	I0817 02:44:10.088004 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:10.088035 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:10.088048 1660780 retry.go:31] will retry after 1.876813009s: only 1 pod(s) have shown up
	I0817 02:44:11.967374 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:11.967401 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:11.967413 1660780 retry.go:31] will retry after 2.6934314s: only 1 pod(s) have shown up
	I0817 02:44:11.600462 1662846 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:44:11.600486 1662846 addons.go:344] enableAddons completed in 672.404962ms
	I0817 02:44:11.900577 1662846 pod_ready.go:92] pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.900629 1662846 pod_ready.go:81] duration metric: took 397.857202ms waiting for pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.900649 1662846 pod_ready.go:38] duration metric: took 804.679453ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:44:11.900677 1662846 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:44:11.900739 1662846 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:11.914199 1662846 api_server.go:70] duration metric: took 986.33934ms to wait for apiserver process to appear ...
	I0817 02:44:11.914238 1662846 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:44:11.914267 1662846 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:44:11.922723 1662846 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 02:44:11.923486 1662846 api_server.go:139] control plane version: v1.21.3
	I0817 02:44:11.923532 1662846 api_server.go:129] duration metric: took 9.277267ms to wait for apiserver health ...
	I0817 02:44:11.923552 1662846 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:44:12.113654 1662846 system_pods.go:59] 8 kube-system pods found
	I0817 02:44:12.113686 1662846 system_pods.go:61] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:12.113692 1662846 system_pods.go:61] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:12.113696 1662846 system_pods.go:61] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:12.113701 1662846 system_pods.go:61] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:12.113735 1662846 system_pods.go:61] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:12.113747 1662846 system_pods.go:61] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:12.113754 1662846 system_pods.go:61] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:12.113767 1662846 system_pods.go:61] "storage-provisioner" [562918b9-84e2-4f7e-9a0a-70742893e39d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:44:12.113773 1662846 system_pods.go:74] duration metric: took 190.207086ms to wait for pod list to return data ...
	I0817 02:44:12.113797 1662846 default_sa.go:34] waiting for default service account to be created ...
	I0817 02:44:12.300796 1662846 default_sa.go:45] found service account: "default"
	I0817 02:44:12.300822 1662846 default_sa.go:55] duration metric: took 187.014117ms for default service account to be created ...
	I0817 02:44:12.300830 1662846 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 02:44:12.506751 1662846 system_pods.go:86] 8 kube-system pods found
	I0817 02:44:12.506786 1662846 system_pods.go:89] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:12.506793 1662846 system_pods.go:89] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:12.510790 1662846 system_pods.go:89] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:12.510805 1662846 system_pods.go:89] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:12.510832 1662846 system_pods.go:89] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:12.510838 1662846 system_pods.go:89] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:12.510844 1662846 system_pods.go:89] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:12.510849 1662846 system_pods.go:89] "storage-provisioner" [562918b9-84e2-4f7e-9a0a-70742893e39d] Running
	I0817 02:44:12.510855 1662846 system_pods.go:126] duration metric: took 210.020669ms to wait for k8s-apps to be running ...
	I0817 02:44:12.510862 1662846 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 02:44:12.510915 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:12.520616 1662846 system_svc.go:56] duration metric: took 9.75179ms WaitForService to wait for kubelet.
	I0817 02:44:12.520637 1662846 kubeadm.go:547] duration metric: took 1.592794882s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 02:44:12.520657 1662846 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:44:12.701568 1662846 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:44:12.701598 1662846 node_conditions.go:123] node cpu capacity is 2
	I0817 02:44:12.701610 1662846 node_conditions.go:105] duration metric: took 180.94709ms to run NodePressure ...
	I0817 02:44:12.701620 1662846 start.go:231] waiting for startup goroutines ...
	I0817 02:44:12.753251 1662846 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 02:44:12.756175 1662846 out.go:177] * Done! kubectl is now configured to use "pause-20210817024148-1554185" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	cf9fe43a28990       ba04bb24b9575       5 seconds ago        Running             storage-provisioner       0                   62792bf694eb6
	6f0de758f96ce       1a1f05a2cd7c2       23 seconds ago       Running             coredns                   0                   b107ef4ef1079
	335440e08b6b6       f37b7c809e5dc       About a minute ago   Running             kindnet-cni               0                   ec238e8d3a6b2
	aad2134f4047a       4ea38350a1beb       About a minute ago   Running             kube-proxy                0                   771e9a30f4bda
	ec4892b38d019       44a6d50ef170d       About a minute ago   Running             kube-apiserver            0                   7a53464dc6cc7
	f45a4f177814d       cb310ff289d79       About a minute ago   Running             kube-controller-manager   0                   73b440ce137c2
	fb735a50aaaf9       05b738aa1bc63       About a minute ago   Running             etcd                      0                   3daddbac69e62
	63836f8fc4c5a       31a3b96cefc1e       About a minute ago   Running             kube-scheduler            0                   bb150a03bb9cc
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:41:51 UTC, end at Tue 2021-08-17 02:44:17 UTC. --
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204239884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204253184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204285201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204412887Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204478118Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0x40003d0f60 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPlug
inConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.4.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204553079Z" level=info msg="Connect containerd service"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204613452Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205685425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205900471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205941102Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 17 02:43:59 pause-20210817024148-1554185 systemd[1]: Started containerd container runtime.
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.207100615Z" level=info msg="containerd successfully booted in 0.049192s"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.211900478Z" level=info msg="Start subscribing containerd event"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.220448431Z" level=info msg="Start recovering state"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299199088Z" level=info msg="Start event monitor"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299328802Z" level=info msg="Start snapshots syncer"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299384194Z" level=info msg="Start cni network conf syncer"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299434393Z" level=info msg="Start streaming server"
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.892999886Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:562918b9-84e2-4f7e-9a0a-70742893e39d,Namespace:kube-system,Attempt:0,}"
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.920384792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d pid=2435
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.993666940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:562918b9-84e2-4f7e-9a0a-70742893e39d,Namespace:kube-system,Attempt:0,} returns sandbox id \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\""
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.996018212Z" level=info msg="CreateContainer within sandbox \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.024621543Z" level=info msg="CreateContainer within sandbox \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\""
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.025144015Z" level=info msg="StartContainer for \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\""
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.093587482Z" level=info msg="StartContainer for \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\" returns successfully"
	
	* 
	* ==> coredns [6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210817024148-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-20210817024148-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210817024148-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T02_42_50_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 02:42:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210817024148-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 02:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:43:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20210817024148-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                1148c453-a7b1-434d-b3fe-0e100988f0a3
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-bzchw                                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     73s
	  kube-system                 etcd-pause-20210817024148-1554185                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         88s
	  kube-system                 kindnet-9lnwm                                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      73s
	  kube-system                 kube-apiserver-pause-20210817024148-1554185              250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-20210817024148-1554185     200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-h6fvl                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-pause-20210817024148-1554185              100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  99s (x5 over 99s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x5 over 99s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x4 over 99s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 79s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s                kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s                kubelet     Node pause-20210817024148-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s                kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 72s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                29s                kubelet     Node pause-20210817024148-1554185 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583] <==
	* 2021-08-17 02:42:39.573167 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 02:42:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 02:42:39.573411 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 02:42:40 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 02:42:40.459131 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 02:42:40.465042 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 02:42:40.465114 I | etcdserver: published {Name:pause-20210817024148-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 02:42:40.465227 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 02:42:40.465256 I | embed: ready to serve client requests
	2021-08-17 02:42:40.469660 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 02:42:40.469974 I | embed: ready to serve client requests
	2021-08-17 02:42:40.471130 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:42:49.193636 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:03.920345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:08.512078 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:18.511903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:28.515027 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:38.512484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:48.512959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:58.512373 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:44:08.511866 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:44:17 up 10:26,  0 users,  load average: 2.28, 1.70, 1.25
	Linux pause-20210817024148-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2] <==
	* I0817 02:42:46.856814       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 02:42:46.869670       1 controller.go:611] quota admission added evaluator for: namespaces
	I0817 02:42:46.901749       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 02:42:47.639727       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 02:42:47.639906       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 02:42:47.665038       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 02:42:47.669713       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 02:42:47.669739       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 02:42:48.274054       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 02:42:48.310293       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 02:42:48.398617       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 02:42:48.400201       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 02:42:48.403641       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 02:42:49.313095       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 02:42:49.844292       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 02:42:49.897658       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 02:42:58.279367       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 02:43:04.136440       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0817 02:43:04.199838       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 02:43:21.010651       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:43:21.010876       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:43:21.010973       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:43:51.301051       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:43:51.301091       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:43:51.301099       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367] <==
	* I0817 02:43:03.483116       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0817 02:43:03.483462       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0817 02:43:03.483847       1 event.go:291] "Event occurred" object="pause-20210817024148-1554185" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210817024148-1554185 event: Registered Node pause-20210817024148-1554185 in Controller"
	I0817 02:43:03.491024       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0817 02:43:03.525270       1 shared_informer.go:247] Caches are synced for HPA 
	I0817 02:43:03.531014       1 shared_informer.go:247] Caches are synced for attach detach 
	I0817 02:43:03.531095       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 02:43:03.531106       1 shared_informer.go:247] Caches are synced for endpoint 
	I0817 02:43:03.542828       1 shared_informer.go:247] Caches are synced for disruption 
	I0817 02:43:03.542888       1 disruption.go:371] Sending events to api server.
	I0817 02:43:03.543000       1 shared_informer.go:247] Caches are synced for job 
	I0817 02:43:03.543063       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0817 02:43:03.593684       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:43:03.657513       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:43:04.079136       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:43:04.079314       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 02:43:04.127990       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:43:04.138879       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0817 02:43:04.216419       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h6fvl"
	I0817 02:43:04.226354       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9lnwm"
	I0817 02:43:04.391394       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0817 02:43:04.400006       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-z7v6b"
	I0817 02:43:04.411923       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-bzchw"
	I0817 02:43:04.436039       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-z7v6b"
	I0817 02:43:48.489538       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66] <==
	* I0817 02:43:05.040138       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:43:05.040427       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:43:05.040569       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:43:05.066321       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:43:05.066475       1 server_others.go:212] Using iptables Proxier.
	I0817 02:43:05.066558       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:43:05.066632       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:43:05.067006       1 server.go:643] Version: v1.21.3
	I0817 02:43:05.067885       1 config.go:315] Starting service config controller
	I0817 02:43:05.068016       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:43:05.068105       1 config.go:224] Starting endpoint slice config controller
	I0817 02:43:05.068187       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:43:05.075717       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:43:05.079542       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:43:05.169159       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 02:43:05.169216       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08] <==
	* E0817 02:42:46.829872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:42:46.829928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0817 02:42:46.830223       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 02:42:46.830599       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:42:46.830655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:42:46.830711       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:42:46.830755       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:42:46.833525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.833686       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:42:46.833809       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.834107       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 02:42:46.840341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.843238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:42:47.692486       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.720037       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:42:47.720290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 02:42:47.763006       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.788725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.934043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:42:47.972589       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:42:47.976875       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:42:47.998841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:42:48.041214       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:42:48.197544       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 02:42:49.931904       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:41:51 UTC, end at Tue 2021-08-17 02:44:18 UTC. --
	Aug 17 02:43:04 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:04.379642    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe-xtables-lock\") pod \"kube-proxy-h6fvl\" (UID: \"e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe\") "
	Aug 17 02:43:04 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:04.379734    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdls7\" (UniqueName: \"kubernetes.io/projected/e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe-kube-api-access-rdls7\") pod \"kube-proxy-h6fvl\" (UID: \"e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe\") "
	Aug 17 02:43:04 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:04.379824    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe-kube-proxy\") pod \"kube-proxy-h6fvl\" (UID: \"e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe\") "
	Aug 17 02:43:08 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:08.432586    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:13 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:13.433632    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:18 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:18.434613    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:23 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:23.436047    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:28 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:28.437251    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:33 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:33.438786    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:53 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:53.010477    1188 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 02:43:53 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:53.042619    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpt2g\" (UniqueName: \"kubernetes.io/projected/5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc-kube-api-access-tpt2g\") pod \"coredns-558bd4d5db-bzchw\" (UID: \"5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc\") "
	Aug 17 02:43:53 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:53.042805    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc-config-volume\") pod \"coredns-558bd4d5db-bzchw\" (UID: \"5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc\") "
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: W0817 02:43:59.062156    1188 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: W0817 02:43:59.062420    1188 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: W0817 02:43:59.162663    1188 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:59.464667    1188 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:59.465014    1188 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:59.465038    1188 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 17 02:44:11 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:11.555946    1188 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 02:44:11 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:11.681239    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvv4f\" (UniqueName: \"kubernetes.io/projected/562918b9-84e2-4f7e-9a0a-70742893e39d-kube-api-access-vvv4f\") pod \"storage-provisioner\" (UID: \"562918b9-84e2-4f7e-9a0a-70742893e39d\") "
	Aug 17 02:44:11 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:11.681340    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/562918b9-84e2-4f7e-9a0a-70742893e39d-tmp\") pod \"storage-provisioner\" (UID: \"562918b9-84e2-4f7e-9a0a-70742893e39d\") "
	Aug 17 02:44:13 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:13.148324    1188 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 17 02:44:13 pause-20210817024148-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 02:44:13 pause-20210817024148-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 02:44:13 pause-20210817024148-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42] <==
	* I0817 02:44:12.092161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 02:44:12.117851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 02:44:12.117933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 02:44:12.144567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 02:44:12.144687       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7dcc4aca-a2da-4802-9687-f8a1d81928d3", APIVersion:"v1", ResourceVersion:"515", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10 became leader
	I0817 02:44:12.145088       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10!
	I0817 02:44:12.245861       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185: exit status 2 (377.404814ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210817024148-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210817024148-1554185 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210817024148-1554185 describe pod : exit status 1 (68.588516ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210817024148-1554185 describe pod : exit status 1
--- FAIL: TestPause/serial/Pause (5.96s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (2.19s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-20210817024148-1554185 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-20210817024148-1554185 --output=json --layout=cluster: exit status 2 (331.312563ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20210817024148-1554185","StatusCode":101,"StatusName":"Pausing","Step":"Pausing","StepDetail":"* Pausing node pause-20210817024148-1554185 ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210817024148-1554185","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":200,"StatusName":"OK"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 02:44:19.075854 1664783 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0817 02:44:19.075885 1664783 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0817 02:44:19.075907 1664783 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax

                                                
                                                
** /stderr **
pause_test.go:190: incorrect status code: 101
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210817024148-1554185
helpers_test.go:236: (dbg) docker inspect pause-20210817024148-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b",
	        "Created": "2021-08-17T02:41:50.320902147Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1657229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:41:51.004888651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/hosts",
	        "LogPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b-json.log",
	        "Name": "/pause-20210817024148-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20210817024148-1554185:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210817024148-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210817024148-1554185",
	                "Source": "/var/lib/docker/volumes/pause-20210817024148-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210817024148-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210817024148-1554185",
	                "name.minikube.sigs.k8s.io": "pause-20210817024148-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8a8e6cd79da22a5765a51578b1ea6e8efa8e27c6c5dbb571e80d79023db3847",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50410"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50406"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50408"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50407"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d8a8e6cd79da",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210817024148-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b9b1ad2a3171",
	                        "pause-20210817024148-1554185"
	                    ],
	                    "NetworkID": "747733296a426a6f52daff293191c7fb9ea960ba5380b91809f97050286a1932",
	                    "EndpointID": "54b17c5460167eb93db2a6807c51835973c485175b87062600e587d432698b14",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185: exit status 2 (316.594372ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p pause-20210817024148-1554185 logs -n 25
helpers_test.go:253: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                             Args                              |                   Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:48 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | ssh sudo cat                                                  |                                             |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                      |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185 cp testdata/cp-test.txt      | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:48 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | multinode-20210817022620-1554185-m02:/home/docker/cp-test.txt |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:48 UTC | Tue, 17 Aug 2021 02:29:48 UTC |
	|         | ssh -n                                                        |                                             |         |         |                               |                               |
	|         | multinode-20210817022620-1554185-m02                          |                                             |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185 cp testdata/cp-test.txt      | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:29:49 UTC |
	|         | multinode-20210817022620-1554185-m03:/home/docker/cp-test.txt |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:29:49 UTC |
	|         | ssh -n                                                        |                                             |         |         |                               |                               |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:30:09 UTC |
	|         | node stop m03                                                 |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:30:10 UTC | Tue, 17 Aug 2021 02:30:40 UTC |
	|         | node start m03 --alsologtostderr                              |                                             |         |         |                               |                               |
	| stop    | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:30:41 UTC | Tue, 17 Aug 2021 02:31:41 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:31:41 UTC | Tue, 17 Aug 2021 02:34:02 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	|         | --wait=true -v=8                                              |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:34:02 UTC | Tue, 17 Aug 2021 02:34:26 UTC |
	|         | node delete m03                                               |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:34:27 UTC | Tue, 17 Aug 2021 02:35:07 UTC |
	|         | stop                                                          |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:35:07 UTC | Tue, 17 Aug 2021 02:36:46 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	|         | --wait=true -v=8                                              |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	|         | --driver=docker                                               |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185-m03        | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:36:47 UTC | Tue, 17 Aug 2021 02:37:57 UTC |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	|         | --driver=docker                                               |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| delete  | -p                                                            | multinode-20210817022620-1554185-m03        | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:37:57 UTC | Tue, 17 Aug 2021 02:38:00 UTC |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	| delete  | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:38:00 UTC | Tue, 17 Aug 2021 02:38:04 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	| start   | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:39:35 UTC | Tue, 17 Aug 2021 02:40:42 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --memory=2048 --driver=docker                                 |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| stop    | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:40:42 UTC | Tue, 17 Aug 2021 02:40:42 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --cancel-scheduled                                            |                                             |         |         |                               |                               |
	| stop    | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:40:55 UTC | Tue, 17 Aug 2021 02:41:20 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --schedule 5s                                                 |                                             |         |         |                               |                               |
	| delete  | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:20 UTC | Tue, 17 Aug 2021 02:41:25 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	| delete  | -p                                                            | insufficient-storage-20210817024125-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:42 UTC | Tue, 17 Aug 2021 02:41:48 UTC |
	|         | insufficient-storage-20210817024125-1554185                   |                                             |         |         |                               |                               |
	| delete  | -p                                                            | missing-upgrade-20210817024148-1554185      | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:02 UTC | Tue, 17 Aug 2021 02:43:07 UTC |
	|         | missing-upgrade-20210817024148-1554185                        |                                             |         |         |                               |                               |
	| start   | -p                                                            | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:48 UTC | Tue, 17 Aug 2021 02:43:55 UTC |
	|         | pause-20210817024148-1554185                                  |                                             |         |         |                               |                               |
	|         | --memory=2048                                                 |                                             |         |         |                               |                               |
	|         | --install-addons=false                                        |                                             |         |         |                               |                               |
	|         | --wait=all --driver=docker                                    |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| start   | -p                                                            | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:55 UTC | Tue, 17 Aug 2021 02:44:12 UTC |
	|         | pause-20210817024148-1554185                                  |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	|         | -v=1 --driver=docker                                          |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| -p      | pause-20210817024148-1554185                                  | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:15 UTC | Tue, 17 Aug 2021 02:44:15 UTC |
	|         | logs -n 25                                                    |                                             |         |         |                               |                               |
	| -p      | pause-20210817024148-1554185                                  | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:16 UTC | Tue, 17 Aug 2021 02:44:18 UTC |
	|         | logs -n 25                                                    |                                             |         |         |                               |                               |
	|---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 02:43:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 02:43:55.935620 1662846 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:43:55.935723 1662846 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:43:55.935738 1662846 out.go:311] Setting ErrFile to fd 2...
	I0817 02:43:55.935766 1662846 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:43:55.935946 1662846 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:43:55.936251 1662846 out.go:305] Setting JSON to false
	I0817 02:43:55.937622 1662846 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37574,"bootTime":1629130662,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:43:55.937719 1662846 start.go:121] virtualization:  
	I0817 02:43:55.939817 1662846 out.go:177] * [pause-20210817024148-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:43:55.941669 1662846 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:43:55.940702 1662846 notify.go:169] Checking for updates...
	I0817 02:43:55.943679 1662846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:43:55.945437 1662846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:43:55.946802 1662846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:43:55.947210 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:43:55.947633 1662846 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:43:56.027823 1662846 docker.go:132] docker version: linux-20.10.8
	I0817 02:43:56.027923 1662846 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:43:56.176370 1662846 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-17 02:43:56.090848407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:43:56.176502 1662846 docker.go:244] overlay module found
	I0817 02:43:56.179749 1662846 out.go:177] * Using the docker driver based on existing profile
	I0817 02:43:56.179775 1662846 start.go:278] selected driver: docker
	I0817 02:43:56.179782 1662846 start.go:751] validating driver "docker" against &{Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:43:56.179866 1662846 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 02:43:56.179980 1662846 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:43:56.292126 1662846 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-17 02:43:56.216837922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:43:56.292468 1662846 cni.go:93] Creating CNI manager for ""
	I0817 02:43:56.292486 1662846 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:43:56.292501 1662846 start_flags.go:277] config:
	{Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:43:56.294520 1662846 out.go:177] * Starting control plane node pause-20210817024148-1554185 in cluster pause-20210817024148-1554185
	I0817 02:43:56.294554 1662846 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 02:43:56.296008 1662846 out.go:177] * Pulling base image ...
	I0817 02:43:56.296031 1662846 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:43:56.296059 1662846 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 02:43:56.296077 1662846 cache.go:56] Caching tarball of preloaded images
	I0817 02:43:56.296206 1662846 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 02:43:56.296231 1662846 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 02:43:56.296337 1662846 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/config.json ...
	I0817 02:43:56.296506 1662846 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 02:43:56.358839 1662846 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 02:43:56.358863 1662846 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 02:43:56.358876 1662846 cache.go:205] Successfully downloaded all kic artifacts
	I0817 02:43:56.358910 1662846 start.go:313] acquiring machines lock for pause-20210817024148-1554185: {Name:mk43ad0c6625870b459afd5900940b78473b954e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 02:43:56.358994 1662846 start.go:317] acquired machines lock for "pause-20210817024148-1554185" in 57.583µs
	I0817 02:43:56.359016 1662846 start.go:93] Skipping create...Using existing machine configuration
	I0817 02:43:56.359025 1662846 fix.go:55] fixHost starting: 
	I0817 02:43:56.359303 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:43:56.396845 1662846 fix.go:108] recreateIfNeeded on pause-20210817024148-1554185: state=Running err=<nil>
	W0817 02:43:56.396879 1662846 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 02:43:56.399120 1662846 out.go:177] * Updating the running docker "pause-20210817024148-1554185" container ...
	I0817 02:43:56.399143 1662846 machine.go:88] provisioning docker machine ...
	I0817 02:43:56.399156 1662846 ubuntu.go:169] provisioning hostname "pause-20210817024148-1554185"
	I0817 02:43:56.399223 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:56.460270 1662846 main.go:130] libmachine: Using SSH client type: native
	I0817 02:43:56.460437 1662846 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50410 <nil> <nil>}
	I0817 02:43:56.460450 1662846 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210817024148-1554185 && echo "pause-20210817024148-1554185" | sudo tee /etc/hostname
	I0817 02:43:56.592739 1662846 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210817024148-1554185
	
	I0817 02:43:56.592882 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:56.636319 1662846 main.go:130] libmachine: Using SSH client type: native
	I0817 02:43:56.636499 1662846 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50410 <nil> <nil>}
	I0817 02:43:56.636520 1662846 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210817024148-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210817024148-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210817024148-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 02:43:56.775961 1662846 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 02:43:56.775984 1662846 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 02:43:56.776018 1662846 ubuntu.go:177] setting up certificates
	I0817 02:43:56.776029 1662846 provision.go:83] configureAuth start
	I0817 02:43:56.776079 1662846 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817024148-1554185
	I0817 02:43:56.827590 1662846 provision.go:138] copyHostCerts
	I0817 02:43:56.827646 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 02:43:56.827654 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 02:43:56.827713 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 02:43:56.827792 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 02:43:56.827799 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 02:43:56.827820 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 02:43:56.827872 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 02:43:56.827880 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 02:43:56.827900 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 02:43:56.827946 1662846 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.pause-20210817024148-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210817024148-1554185]
	I0817 02:43:57.691741 1662846 provision.go:172] copyRemoteCerts
	I0817 02:43:57.691838 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 02:43:57.691973 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:57.733998 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:57.822192 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 02:43:57.856857 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 02:43:57.883796 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 02:43:57.906578 1662846 provision.go:86] duration metric: configureAuth took 1.130540743s
	I0817 02:43:57.906595 1662846 ubuntu.go:193] setting minikube options for container-runtime
	I0817 02:43:57.906755 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:43:57.906762 1662846 machine.go:91] provisioned docker machine in 1.507614043s
	I0817 02:43:57.906767 1662846 start.go:267] post-start starting for "pause-20210817024148-1554185" (driver="docker")
	I0817 02:43:57.906773 1662846 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 02:43:57.906827 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 02:43:57.906865 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:57.948809 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.037379 1662846 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 02:43:58.040800 1662846 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 02:43:58.040823 1662846 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 02:43:58.040834 1662846 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 02:43:58.040841 1662846 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 02:43:58.040851 1662846 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 02:43:58.040896 1662846 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 02:43:58.040978 1662846 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 02:43:58.041076 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 02:43:58.047645 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:43:58.063186 1662846 start.go:270] post-start completed in 156.406695ms
	I0817 02:43:58.063236 1662846 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:43:58.063283 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.095312 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.179965 1662846 fix.go:57] fixHost completed within 1.82093535s
	I0817 02:43:58.179990 1662846 start.go:80] releasing machines lock for "pause-20210817024148-1554185", held for 1.820983908s
	I0817 02:43:58.180071 1662846 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817024148-1554185
	I0817 02:43:58.213692 1662846 ssh_runner.go:149] Run: systemctl --version
	I0817 02:43:58.213738 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.213787 1662846 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 02:43:58.213879 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.294808 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.305791 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.579632 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 02:43:58.595650 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 02:43:58.606604 1662846 docker.go:153] disabling docker service ...
	I0817 02:43:58.606667 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 02:43:58.617075 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 02:43:58.626385 1662846 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 02:43:58.766845 1662846 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 02:43:58.893614 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 02:43:58.903792 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 02:43:58.915967 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
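
The base64 payload above is simply the containerd config.toml being materialized on the node via `base64 -d | sudo tee`. As a minimal sketch of that decode-and-write step (the payload here is a short placeholder, not the blob from the log, and the paths assume a root shell on the node):

package main

import (
	"encoding/base64"
	"log"
	"os"
)

func main() {
	// Placeholder payload; decodes to: root = "/var/lib/containerd"
	encoded := "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo="

	// Decode exactly as `base64 -d` does on the node.
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		log.Fatalf("decoding containerd config: %v", err)
	}

	// Write the decoded TOML where containerd expects it (needs root on a real node).
	if err := os.MkdirAll("/etc/containerd", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/containerd/config.toml", raw, 0o644); err != nil {
		log.Fatal(err)
	}
}
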
	I0817 02:43:58.928706 1662846 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 02:43:58.935023 1662846 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 02:43:58.941385 1662846 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 02:43:59.052351 1662846 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 02:43:59.207573 1662846 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 02:43:59.207636 1662846 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 02:43:59.211818 1662846 start.go:413] Will wait 60s for crictl version
	I0817 02:43:59.211935 1662846 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:43:59.253079 1662846 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T02:43:59Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 02:44:02.120599 1660780 out.go:204]   - Configuring RBAC rules ...
	I0817 02:44:02.550006 1660780 cni.go:93] Creating CNI manager for ""
	I0817 02:44:02.550033 1660780 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:44:02.551994 1660780 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:44:02.552050 1660780 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:44:02.555689 1660780 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0817 02:44:02.555706 1660780 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:44:02.567554 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:44:03.257879 1660780 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:44:03.258050 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=kubernetes-upgrade-20210817024307-1554185 minikube.k8s.io/updated_at=2021_08_17T02_44_03_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:44:03.258163 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:44:03.421126 1660780 kubeadm.go:985] duration metric: took 163.181168ms to wait for elevateKubeSystemPrivileges.
	I0817 02:44:03.421159 1660780 ops.go:34] apiserver oom_adj: 16
	I0817 02:44:03.421165 1660780 ops.go:39] adjusting apiserver oom_adj to -10
	I0817 02:44:03.421175 1660780 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
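
The ops.go lines above record the apiserver's OOM score being lowered from 16 to -10 so the kernel is less likely to kill it under memory pressure. A hypothetical sketch of that adjustment (the pgrep pattern and target value mirror the log; this is not minikube's ops.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID the same way the logged shell command does.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not found:", err)
		return
	}
	pid := strings.TrimSpace(string(out))

	// A negative oom_adj makes the OOM killer less likely to pick the apiserver.
	path := fmt.Sprintf("/proc/%s/oom_adj", pid)
	if err := os.WriteFile(path, []byte("-10\n"), 0o644); err != nil {
		fmt.Println("writing oom_adj (needs root):", err)
	}
}
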
	I0817 02:44:03.435297 1660780 kubeadm.go:392] StartCluster complete in 20.942151429s
	I0817 02:44:03.435324 1660780 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:03.435396 1660780 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:44:03.436729 1660780 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:03.437591 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:03.960675 1660780 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20210817024307-1554185" rescaled to 1
	I0817 02:44:03.960735 1660780 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0817 02:44:03.962792 1660780 out.go:177] * Verifying Kubernetes components...
	I0817 02:44:03.962884 1660780 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:03.960776 1660780 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:44:03.961119 1660780 config.go:177] Loaded profile config "kubernetes-upgrade-20210817024307-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0817 02:44:03.961133 1660780 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 02:44:03.963065 1660780 addons.go:59] Setting storage-provisioner=true in profile "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963086 1660780 addons.go:135] Setting addon storage-provisioner=true in "kubernetes-upgrade-20210817024307-1554185"
	W0817 02:44:03.963092 1660780 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:44:03.963117 1660780 host.go:66] Checking if "kubernetes-upgrade-20210817024307-1554185" exists ...
	I0817 02:44:03.963609 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:03.963742 1660780 addons.go:59] Setting default-storageclass=true in profile "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963759 1660780 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963983 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:04.042068 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:04.045974 1660780 addons.go:135] Setting addon default-storageclass=true in "kubernetes-upgrade-20210817024307-1554185"
	W0817 02:44:04.045994 1660780 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:44:04.046020 1660780 host.go:66] Checking if "kubernetes-upgrade-20210817024307-1554185" exists ...
	I0817 02:44:04.046775 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:04.067540 1660780 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:44:04.067635 1660780 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:04.067644 1660780 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:44:04.067698 1660780 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817024307-1554185
	I0817 02:44:04.134515 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:04.135773 1660780 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:44:04.135809 1660780 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:04.135958 1660780 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
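
The long sed pipeline above splices a hosts{} stanza for host.minikube.internal into the CoreDNS Corefile just before its forward directive, then replaces the ConfigMap. A small sketch of the same text edit on a Corefile string, assuming the 192.168.58.1 gateway from the log (the sample Corefile is illustrative):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block just before the forward directive,
// mirroring what the sed expression does to the coredns ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }",
		hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
	fmt.Println(injectHostRecord(corefile, "192.168.58.1"))
}
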
	I0817 02:44:04.162945 1660780 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:04.162964 1660780 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:44:04.163014 1660780 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817024307-1554185
	I0817 02:44:04.169842 1660780 api_server.go:70] duration metric: took 209.062582ms to wait for apiserver process to appear ...
	I0817 02:44:04.169860 1660780 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:44:04.169869 1660780 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:44:04.202845 1660780 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0817 02:44:04.203126 1660780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50433 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210817024307-1554185/id_rsa Username:docker}
	I0817 02:44:04.205563 1660780 api_server.go:139] control plane version: v1.14.0
	I0817 02:44:04.205585 1660780 api_server.go:129] duration metric: took 35.719651ms to wait for apiserver health ...
	I0817 02:44:04.205593 1660780 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:44:04.225295 1660780 system_pods.go:59] 0 kube-system pods found
	I0817 02:44:04.225327 1660780 retry.go:31] will retry after 305.063636ms: only 0 pod(s) have shown up
	I0817 02:44:04.236904 1660780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50433 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210817024307-1554185/id_rsa Username:docker}
	I0817 02:44:04.357326 1660780 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:04.437896 1660780 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:04.532218 1660780 system_pods.go:59] 0 kube-system pods found
	I0817 02:44:04.532272 1660780 retry.go:31] will retry after 338.212508ms: only 0 pod(s) have shown up
	I0817 02:44:04.546195 1660780 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0817 02:44:04.758200 1660780 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:44:04.758230 1660780 addons.go:344] enableAddons completed in 797.092673ms
	I0817 02:44:04.873249 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:04.873282 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:04.873317 1660780 retry.go:31] will retry after 378.459802ms: only 1 pod(s) have shown up
	I0817 02:44:05.254768 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:05.254794 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:05.254805 1660780 retry.go:31] will retry after 469.882201ms: only 1 pod(s) have shown up
	I0817 02:44:05.727524 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:05.727553 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:05.727565 1660780 retry.go:31] will retry after 667.365439ms: only 1 pod(s) have shown up
	I0817 02:44:06.397213 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:06.397242 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:06.397268 1660780 retry.go:31] will retry after 597.243124ms: only 1 pod(s) have shown up
	I0817 02:44:06.996568 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:06.996597 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:06.996622 1660780 retry.go:31] will retry after 789.889932ms: only 1 pod(s) have shown up
	I0817 02:44:10.303855 1662846 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:44:10.329831 1662846 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 02:44:10.329883 1662846 ssh_runner.go:149] Run: containerd --version
	I0817 02:44:10.350547 1662846 ssh_runner.go:149] Run: containerd --version
	I0817 02:44:10.372464 1662846 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 02:44:10.372542 1662846 cli_runner.go:115] Run: docker network inspect pause-20210817024148-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 02:44:10.403413 1662846 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 02:44:10.406786 1662846 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:44:10.406887 1662846 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:44:10.432961 1662846 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:44:10.432979 1662846 containerd.go:517] Images already preloaded, skipping extraction
	I0817 02:44:10.433017 1662846 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:44:10.456152 1662846 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:44:10.456171 1662846 cache_images.go:74] Images are preloaded, skipping loading
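
The preload check above concludes after listing images with `sudo crictl images --output json`. A sketch of how such a check could parse that output and compare it against a required list; the JSON field names follow crictl's image list output and the required tags are illustrative assumptions, not taken from cache_images.go:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models only the parts of `crictl images --output json` this sketch needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	required := []string{"k8s.gcr.io/pause:3.4.1"} // placeholder required set

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("listing images:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("parsing crictl output:", err)
		return
	}

	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("missing preloaded image:", want)
		}
	}
}
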
	I0817 02:44:10.456212 1662846 ssh_runner.go:149] Run: sudo crictl info
	I0817 02:44:10.478011 1662846 cni.go:93] Creating CNI manager for ""
	I0817 02:44:10.478033 1662846 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:44:10.478056 1662846 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 02:44:10.478081 1662846 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210817024148-1554185 NodeName:pause-20210817024148-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:
/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 02:44:10.478244 1662846 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20210817024148-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 02:44:10.478333 1662846 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210817024148-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 02:44:10.478387 1662846 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 02:44:10.484940 1662846 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 02:44:10.484985 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 02:44:10.490924 1662846 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (573 bytes)
	I0817 02:44:10.502660 1662846 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 02:44:10.513948 1662846 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2078 bytes)
	I0817 02:44:10.524832 1662846 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 02:44:10.527527 1662846 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185 for IP: 192.168.49.2
	I0817 02:44:10.527570 1662846 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 02:44:10.527589 1662846 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 02:44:10.527638 1662846 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key
	I0817 02:44:10.527664 1662846 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.key.dd3b5fb2
	I0817 02:44:10.527684 1662846 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.key
	I0817 02:44:10.527782 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 02:44:10.527819 1662846 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 02:44:10.527834 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 02:44:10.527857 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 02:44:10.527884 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 02:44:10.527924 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 02:44:10.527974 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:44:10.529038 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 02:44:10.544784 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 02:44:10.559660 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 02:44:10.574785 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 02:44:10.590201 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 02:44:10.605037 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 02:44:10.623387 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 02:44:10.639037 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 02:44:10.654135 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 02:44:10.669090 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 02:44:10.684622 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 02:44:10.699670 1662846 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 02:44:10.710765 1662846 ssh_runner.go:149] Run: openssl version
	I0817 02:44:10.717019 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 02:44:10.724137 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.726900 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.726944 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.731588 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 02:44:10.737384 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 02:44:10.743565 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.746327 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.746375 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.750493 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 02:44:10.756191 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 02:44:10.762612 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.765540 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.765579 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.769923 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
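
Each CA above is copied into /usr/share/ca-certificates and then symlinked under its OpenSSL subject hash (for example b5213941.0) so OpenSSL can find it. A sketch of that hash-and-link pair, assuming the `openssl` binary is available and using a placeholder certificate path:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL uses
// for CA lookup, matching the `openssl x509 -hash` + `ln -fs` pair in the log.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)

	// ln -fs: remove any stale link first, then point the hash name at the cert.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
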
	I0817 02:44:10.775678 1662846 kubeadm.go:390] StartCluster: {Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:44:10.775762 1662846 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 02:44:10.775824 1662846 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:44:10.802431 1662846 cri.go:76] found id: "6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a"
	I0817 02:44:10.802447 1662846 cri.go:76] found id: "335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0"
	I0817 02:44:10.802453 1662846 cri.go:76] found id: "aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66"
	I0817 02:44:10.802457 1662846 cri.go:76] found id: "ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:10.802462 1662846 cri.go:76] found id: "f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367"
	I0817 02:44:10.802470 1662846 cri.go:76] found id: "fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583"
	I0817 02:44:10.802478 1662846 cri.go:76] found id: "63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08"
	I0817 02:44:10.802488 1662846 cri.go:76] found id: ""
	I0817 02:44:10.802522 1662846 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:44:10.836121 1662846 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","pid":1575,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0/rootfs","created":"2021-08-17T02:43:04.897293883Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","pid":919,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60/rootfs","created":"2021-08-17T02:42:39.314001317Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210817024148-1554185_ed0234c9cb81abc8dc5bbcdfaf787883"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","pid":1063,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08/rootfs","created":"2021-08-17T02:42:39.515581049Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id"
:"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a/rootfs","created":"2021-08-17T02:43:53.705600139Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","rootfs":"/run/containerd
/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb/rootfs","created":"2021-08-17T02:42:39.34752748Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210817024148-1554185_0718a393987d1be3d2ae606f942a3f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa/rootfs","created":"2021-08-17T02:43:04.691568232Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"771e9a30f4bd
ab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-h6fvl_e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9/rootfs","created":"2021-08-17T02:42:39.402749597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210817024148-1554185_a97ec150b358b2334cd33dc4c454d661"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aad2134f4047a0355037c106be1e03aab147a921
033b3042e86497dd8533ae66","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66/rootfs","created":"2021-08-17T02:43:04.899987912Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","pid":1861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec/rootfs","created":"2021-08-17T02:43:53.53831483Z",
"annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-bzchw_5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9/rootfs","created":"2021-08-17T02:42:39.343864386Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210817024148-1554185_b3d802d137e
d3edc177474465272f732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c/rootfs","created":"2021-08-17T02:43:04.691547802Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9lnwm_8b4c4c45-1613-49c8-9b03-e13120205af4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","pid":1140,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c466
5f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/rootfs","created":"2021-08-17T02:42:39.62877376Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","pid":1098,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367/rootfs","created":"2021-08-17T02:42:39.549869273Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607d
f03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","pid":1061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583/rootfs","created":"2021-08-17T02:42:39.510915707Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60"},"owner":"root"}]
	I0817 02:44:10.836356 1662846 cri.go:113] list returned 14 containers
	I0817 02:44:10.836368 1662846 cri.go:116] container: {ID:335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 Status:running}
	I0817 02:44:10.836387 1662846 cri.go:122] skipping {335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 running}: state = "running", want "paused"
	I0817 02:44:10.836402 1662846 cri.go:116] container: {ID:3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 Status:running}
	I0817 02:44:10.836408 1662846 cri.go:118] skipping 3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 - not in ps
	I0817 02:44:10.836413 1662846 cri.go:116] container: {ID:63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 Status:running}
	I0817 02:44:10.836423 1662846 cri.go:122] skipping {63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 running}: state = "running", want "paused"
	I0817 02:44:10.836429 1662846 cri.go:116] container: {ID:6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a Status:running}
	I0817 02:44:10.836439 1662846 cri.go:122] skipping {6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a running}: state = "running", want "paused"
	I0817 02:44:10.836444 1662846 cri.go:116] container: {ID:73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb Status:running}
	I0817 02:44:10.836454 1662846 cri.go:118] skipping 73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb - not in ps
	I0817 02:44:10.836458 1662846 cri.go:116] container: {ID:771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa Status:running}
	I0817 02:44:10.836463 1662846 cri.go:118] skipping 771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa - not in ps
	I0817 02:44:10.836467 1662846 cri.go:116] container: {ID:7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 Status:running}
	I0817 02:44:10.836477 1662846 cri.go:118] skipping 7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 - not in ps
	I0817 02:44:10.836481 1662846 cri.go:116] container: {ID:aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 Status:running}
	I0817 02:44:10.836493 1662846 cri.go:122] skipping {aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 running}: state = "running", want "paused"
	I0817 02:44:10.836499 1662846 cri.go:116] container: {ID:b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec Status:running}
	I0817 02:44:10.836512 1662846 cri.go:118] skipping b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec - not in ps
	I0817 02:44:10.836518 1662846 cri.go:116] container: {ID:bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 Status:running}
	I0817 02:44:10.836524 1662846 cri.go:118] skipping bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 - not in ps
	I0817 02:44:10.836528 1662846 cri.go:116] container: {ID:ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c Status:running}
	I0817 02:44:10.836533 1662846 cri.go:118] skipping ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c - not in ps
	I0817 02:44:10.836537 1662846 cri.go:116] container: {ID:ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 Status:running}
	I0817 02:44:10.836542 1662846 cri.go:122] skipping {ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 running}: state = "running", want "paused"
	I0817 02:44:10.836547 1662846 cri.go:116] container: {ID:f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 Status:running}
	I0817 02:44:10.836553 1662846 cri.go:122] skipping {f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 running}: state = "running", want "paused"
	I0817 02:44:10.836562 1662846 cri.go:116] container: {ID:fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 Status:running}
	I0817 02:44:10.836567 1662846 cri.go:122] skipping {fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 running}: state = "running", want "paused"
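
The skip decisions above reduce to a simple rule: a container is a candidate only if it appears in the separate ps listing and its state already matches the wanted state ("paused" here). A minimal Go sketch of such a filter follows; it is illustrative only, not minikube's actual cri.go, and the IDs used in main are placeholders.

package main

import "fmt"

// container mirrors the {ID Status} pairs printed in the log above.
type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers that are present in the ps listing and
// whose reported state equals the wanted state; everything else is skipped,
// matching the "not in ps" and `state = "running", want "paused"` lines above.
func filterByState(all []container, inPs map[string]bool, want string) []container {
	var kept []container
	for _, c := range all {
		if !inPs[c.ID] {
			continue // skipping <id> - not in ps
		}
		if c.Status != want {
			continue // state does not match the wanted state
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	// Placeholder IDs, not taken from the cluster above.
	all := []container{{ID: "aaa111", Status: "running"}, {ID: "bbb222", Status: "paused"}}
	inPs := map[string]bool{"aaa111": true, "bbb222": true}
	fmt.Println(filterByState(all, inPs, "paused")) // only bbb222 survives
}
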
	I0817 02:44:10.836606 1662846 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 02:44:10.842672 1662846 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 02:44:10.842686 1662846 kubeadm.go:600] restartCluster start
	I0817 02:44:10.842722 1662846 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 02:44:10.848569 1662846 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:44:10.849505 1662846 kubeconfig.go:93] found "pause-20210817024148-1554185" server: "https://192.168.49.2:8443"
	I0817 02:44:10.850203 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:10.851914 1662846 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 02:44:10.861281 1662846 api_server.go:164] Checking apiserver status ...
	I0817 02:44:10.861344 1662846 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:10.872606 1662846 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup
	I0817 02:44:10.878948 1662846 api_server.go:180] apiserver freezer: "6:freezer:/docker/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/kubepods/burstable/poda97ec150b358b2334cd33dc4c454d661/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:10.879031 1662846 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/kubepods/burstable/poda97ec150b358b2334cd33dc4c454d661/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/freezer.state
	I0817 02:44:10.887614 1662846 api_server.go:202] freezer state: "THAWED"
	I0817 02:44:10.887653 1662846 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:44:10.897405 1662846 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 02:44:10.921793 1662846 system_pods.go:86] 7 kube-system pods found
	I0817 02:44:10.921826 1662846 system_pods.go:89] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:10.921833 1662846 system_pods.go:89] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:10.921837 1662846 system_pods.go:89] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:10.921846 1662846 system_pods.go:89] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:10.921851 1662846 system_pods.go:89] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:10.921860 1662846 system_pods.go:89] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:10.921864 1662846 system_pods.go:89] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:10.922656 1662846 api_server.go:139] control plane version: v1.21.3
	I0817 02:44:10.922674 1662846 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.49.2
	I0817 02:44:10.922683 1662846 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0817 02:44:10.922688 1662846 kubeadm.go:604] restartCluster took 79.997602ms
	I0817 02:44:10.922692 1662846 kubeadm.go:392] StartCluster complete in 147.020078ms
	I0817 02:44:10.922711 1662846 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:10.922795 1662846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:44:10.923814 1662846 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:10.924639 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:10.927764 1662846 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210817024148-1554185" rescaled to 1
	I0817 02:44:10.927819 1662846 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 02:44:10.929557 1662846 out.go:177] * Verifying Kubernetes components...
	I0817 02:44:10.929621 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:10.928056 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:44:10.928073 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:44:10.928083 1662846 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 02:44:10.929772 1662846 addons.go:59] Setting storage-provisioner=true in profile "pause-20210817024148-1554185"
	I0817 02:44:10.929796 1662846 addons.go:135] Setting addon storage-provisioner=true in "pause-20210817024148-1554185"
	W0817 02:44:10.929827 1662846 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:44:10.929865 1662846 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:10.930344 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:10.935094 1662846 addons.go:59] Setting default-storageclass=true in profile "pause-20210817024148-1554185"
	I0817 02:44:10.935122 1662846 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210817024148-1554185"
	I0817 02:44:10.935399 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:11.011181 1662846 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:44:11.011290 1662846 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:11.011301 1662846 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:44:11.011350 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:11.015016 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:11.019001 1662846 addons.go:135] Setting addon default-storageclass=true in "pause-20210817024148-1554185"
	W0817 02:44:11.019019 1662846 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:44:11.019042 1662846 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:11.019478 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:11.072649 1662846 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:11.072687 1662846 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:44:11.072739 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:11.092036 1662846 node_ready.go:35] waiting up to 6m0s for node "pause-20210817024148-1554185" to be "Ready" ...
	I0817 02:44:11.092329 1662846 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 02:44:11.095935 1662846 node_ready.go:49] node "pause-20210817024148-1554185" has status "Ready":"True"
	I0817 02:44:11.095950 1662846 node_ready.go:38] duration metric: took 3.885427ms waiting for node "pause-20210817024148-1554185" to be "Ready" ...
	I0817 02:44:11.095958 1662846 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:44:11.105426 1662846 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.115130 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:11.136809 1662846 pod_ready.go:92] pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.136824 1662846 pod_ready.go:81] duration metric: took 31.377737ms waiting for pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.136834 1662846 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.140355 1662846 pod_ready.go:92] pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.140372 1662846 pod_ready.go:81] duration metric: took 3.530681ms waiting for pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.140384 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.145229 1662846 pod_ready.go:92] pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.145269 1662846 pod_ready.go:81] duration metric: took 4.874316ms waiting for pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.145292 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.155084 1662846 pod_ready.go:92] pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.155097 1662846 pod_ready.go:81] duration metric: took 9.787982ms waiting for pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.155105 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6fvl" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.159276 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:11.210907 1662846 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:11.257270 1662846 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:11.502673 1662846 pod_ready.go:92] pod "kube-proxy-h6fvl" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.502728 1662846 pod_ready.go:81] duration metric: took 347.614714ms waiting for pod "kube-proxy-h6fvl" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.502752 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:07.789502 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:07.789527 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:07.789539 1660780 retry.go:31] will retry after 951.868007ms: only 1 pod(s) have shown up
	I0817 02:44:08.743829 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:08.743853 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:08.743868 1660780 retry.go:31] will retry after 1.341783893s: only 1 pod(s) have shown up
	I0817 02:44:10.088004 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:10.088035 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:10.088048 1660780 retry.go:31] will retry after 1.876813009s: only 1 pod(s) have shown up
	I0817 02:44:11.967374 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:11.967401 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:11.967413 1660780 retry.go:31] will retry after 2.6934314s: only 1 pod(s) have shown up
	I0817 02:44:11.600462 1662846 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:44:11.600486 1662846 addons.go:344] enableAddons completed in 672.404962ms
	I0817 02:44:11.900577 1662846 pod_ready.go:92] pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.900629 1662846 pod_ready.go:81] duration metric: took 397.857202ms waiting for pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.900649 1662846 pod_ready.go:38] duration metric: took 804.679453ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
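
The pod_ready waits above poll each system-critical pod until its Ready condition reports True or the timeout expires. A rough client-go sketch of that kind of wait follows; it is an illustration under assumptions, not minikube's pod_ready.go, and the kubeconfig path and pod name in main are placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its PodReady condition is True or the
// timeout elapses, similar in spirit to the waits logged above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-558bd4d5db-bzchw", 6*time.Minute))
}
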
	I0817 02:44:11.900677 1662846 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:44:11.900739 1662846 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:11.914199 1662846 api_server.go:70] duration metric: took 986.33934ms to wait for apiserver process to appear ...
	I0817 02:44:11.914238 1662846 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:44:11.914267 1662846 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:44:11.922723 1662846 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 02:44:11.923486 1662846 api_server.go:139] control plane version: v1.21.3
	I0817 02:44:11.923532 1662846 api_server.go:129] duration metric: took 9.277267ms to wait for apiserver health ...
	I0817 02:44:11.923552 1662846 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:44:12.113654 1662846 system_pods.go:59] 8 kube-system pods found
	I0817 02:44:12.113686 1662846 system_pods.go:61] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:12.113692 1662846 system_pods.go:61] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:12.113696 1662846 system_pods.go:61] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:12.113701 1662846 system_pods.go:61] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:12.113735 1662846 system_pods.go:61] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:12.113747 1662846 system_pods.go:61] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:12.113754 1662846 system_pods.go:61] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:12.113767 1662846 system_pods.go:61] "storage-provisioner" [562918b9-84e2-4f7e-9a0a-70742893e39d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:44:12.113773 1662846 system_pods.go:74] duration metric: took 190.207086ms to wait for pod list to return data ...
	I0817 02:44:12.113797 1662846 default_sa.go:34] waiting for default service account to be created ...
	I0817 02:44:12.300796 1662846 default_sa.go:45] found service account: "default"
	I0817 02:44:12.300822 1662846 default_sa.go:55] duration metric: took 187.014117ms for default service account to be created ...
	I0817 02:44:12.300830 1662846 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 02:44:12.506751 1662846 system_pods.go:86] 8 kube-system pods found
	I0817 02:44:12.506786 1662846 system_pods.go:89] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:12.506793 1662846 system_pods.go:89] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:12.510790 1662846 system_pods.go:89] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:12.510805 1662846 system_pods.go:89] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:12.510832 1662846 system_pods.go:89] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:12.510838 1662846 system_pods.go:89] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:12.510844 1662846 system_pods.go:89] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:12.510849 1662846 system_pods.go:89] "storage-provisioner" [562918b9-84e2-4f7e-9a0a-70742893e39d] Running
	I0817 02:44:12.510855 1662846 system_pods.go:126] duration metric: took 210.020669ms to wait for k8s-apps to be running ...
	I0817 02:44:12.510862 1662846 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 02:44:12.510915 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:12.520616 1662846 system_svc.go:56] duration metric: took 9.75179ms WaitForService to wait for kubelet.
	I0817 02:44:12.520637 1662846 kubeadm.go:547] duration metric: took 1.592794882s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 02:44:12.520657 1662846 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:44:12.701568 1662846 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:44:12.701598 1662846 node_conditions.go:123] node cpu capacity is 2
	I0817 02:44:12.701610 1662846 node_conditions.go:105] duration metric: took 180.94709ms to run NodePressure ...
	I0817 02:44:12.701620 1662846 start.go:231] waiting for startup goroutines ...
	I0817 02:44:12.753251 1662846 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 02:44:12.756175 1662846 out.go:177] * Done! kubectl is now configured to use "pause-20210817024148-1554185" cluster and "default" namespace by default
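
The api_server.go healthz checks in this run amount to an HTTPS GET against https://192.168.49.2:8443/healthz that expects a 200 response with body "ok". A self-contained Go sketch of such a probe follows; it skips TLS verification for brevity, whereas a real check would verify the server against the cluster CA and client certificates shown in the rest.Config dumps above.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy performs a GET on <endpoint>/healthz and treats a 200
// response with body "ok" as healthy.
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut only; verify against the cluster CA in practice.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.49.2:8443")
	fmt.Println(ok, err)
}
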
	I0817 02:44:14.664339 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:14.664360 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:14.664372 1660780 retry.go:31] will retry after 2.494582248s: only 1 pod(s) have shown up
	I0817 02:44:17.162988 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:17.163020 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:17.163032 1660780 retry.go:31] will retry after 3.420895489s: only 1 pod(s) have shown up
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	cf9fe43a28990       ba04bb24b9575       7 seconds ago        Running             storage-provisioner       0                   62792bf694eb6
	6f0de758f96ce       1a1f05a2cd7c2       26 seconds ago       Running             coredns                   0                   b107ef4ef1079
	335440e08b6b6       f37b7c809e5dc       About a minute ago   Running             kindnet-cni               0                   ec238e8d3a6b2
	aad2134f4047a       4ea38350a1beb       About a minute ago   Running             kube-proxy                0                   771e9a30f4bda
	ec4892b38d019       44a6d50ef170d       About a minute ago   Running             kube-apiserver            0                   7a53464dc6cc7
	f45a4f177814d       cb310ff289d79       About a minute ago   Running             kube-controller-manager   0                   73b440ce137c2
	fb735a50aaaf9       05b738aa1bc63       About a minute ago   Running             etcd                      0                   3daddbac69e62
	63836f8fc4c5a       31a3b96cefc1e       About a minute ago   Running             kube-scheduler            0                   bb150a03bb9cc
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:41:51 UTC, end at Tue 2021-08-17 02:44:19 UTC. --
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204239884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204253184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204285201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204412887Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204478118Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0x40003d0f60 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.4.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204553079Z" level=info msg="Connect containerd service"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204613452Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205685425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205900471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205941102Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 17 02:43:59 pause-20210817024148-1554185 systemd[1]: Started containerd container runtime.
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.207100615Z" level=info msg="containerd successfully booted in 0.049192s"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.211900478Z" level=info msg="Start subscribing containerd event"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.220448431Z" level=info msg="Start recovering state"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299199088Z" level=info msg="Start event monitor"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299328802Z" level=info msg="Start snapshots syncer"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299384194Z" level=info msg="Start cni network conf syncer"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299434393Z" level=info msg="Start streaming server"
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.892999886Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:562918b9-84e2-4f7e-9a0a-70742893e39d,Namespace:kube-system,Attempt:0,}"
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.920384792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d pid=2435
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.993666940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:562918b9-84e2-4f7e-9a0a-70742893e39d,Namespace:kube-system,Attempt:0,} returns sandbox id \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\""
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.996018212Z" level=info msg="CreateContainer within sandbox \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.024621543Z" level=info msg="CreateContainer within sandbox \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\""
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.025144015Z" level=info msg="StartContainer for \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\""
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.093587482Z" level=info msg="StartContainer for \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\" returns successfully"
	
	* 
	* ==> coredns [6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210817024148-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-20210817024148-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210817024148-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T02_42_50_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 02:42:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210817024148-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 02:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:43:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20210817024148-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                1148c453-a7b1-434d-b3fe-0e100988f0a3
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-bzchw                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-20210817024148-1554185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         91s
	  kube-system                 kindnet-9lnwm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-20210817024148-1554185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-pause-20210817024148-1554185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-h6fvl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-20210817024148-1554185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  102s (x5 over 102s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x5 over 102s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x4 over 102s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 82s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 75s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                32s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583] <==
	* 2021-08-17 02:42:39.573167 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 02:42:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 02:42:39.573411 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 02:42:40 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 02:42:40.459131 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 02:42:40.465042 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 02:42:40.465114 I | etcdserver: published {Name:pause-20210817024148-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 02:42:40.465227 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 02:42:40.465256 I | embed: ready to serve client requests
	2021-08-17 02:42:40.469660 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 02:42:40.469974 I | embed: ready to serve client requests
	2021-08-17 02:42:40.471130 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:42:49.193636 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:03.920345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:08.512078 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:18.511903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:28.515027 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:38.512484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:48.512959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:58.512373 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:44:08.511866 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:44:20 up 10:26,  0 users,  load average: 2.28, 1.70, 1.25
	Linux pause-20210817024148-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2] <==
	* I0817 02:42:46.856814       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 02:42:46.869670       1 controller.go:611] quota admission added evaluator for: namespaces
	I0817 02:42:46.901749       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 02:42:47.639727       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 02:42:47.639906       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 02:42:47.665038       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 02:42:47.669713       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 02:42:47.669739       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 02:42:48.274054       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 02:42:48.310293       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 02:42:48.398617       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 02:42:48.400201       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 02:42:48.403641       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 02:42:49.313095       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 02:42:49.844292       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 02:42:49.897658       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 02:42:58.279367       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 02:43:04.136440       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0817 02:43:04.199838       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 02:43:21.010651       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:43:21.010876       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:43:21.010973       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:43:51.301051       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:43:51.301091       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:43:51.301099       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367] <==
	* I0817 02:43:03.483116       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0817 02:43:03.483462       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0817 02:43:03.483847       1 event.go:291] "Event occurred" object="pause-20210817024148-1554185" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210817024148-1554185 event: Registered Node pause-20210817024148-1554185 in Controller"
	I0817 02:43:03.491024       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0817 02:43:03.525270       1 shared_informer.go:247] Caches are synced for HPA 
	I0817 02:43:03.531014       1 shared_informer.go:247] Caches are synced for attach detach 
	I0817 02:43:03.531095       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 02:43:03.531106       1 shared_informer.go:247] Caches are synced for endpoint 
	I0817 02:43:03.542828       1 shared_informer.go:247] Caches are synced for disruption 
	I0817 02:43:03.542888       1 disruption.go:371] Sending events to api server.
	I0817 02:43:03.543000       1 shared_informer.go:247] Caches are synced for job 
	I0817 02:43:03.543063       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0817 02:43:03.593684       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:43:03.657513       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:43:04.079136       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:43:04.079314       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 02:43:04.127990       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:43:04.138879       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0817 02:43:04.216419       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h6fvl"
	I0817 02:43:04.226354       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9lnwm"
	I0817 02:43:04.391394       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0817 02:43:04.400006       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-z7v6b"
	I0817 02:43:04.411923       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-bzchw"
	I0817 02:43:04.436039       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-z7v6b"
	I0817 02:43:48.489538       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66] <==
	* I0817 02:43:05.040138       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:43:05.040427       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:43:05.040569       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:43:05.066321       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:43:05.066475       1 server_others.go:212] Using iptables Proxier.
	I0817 02:43:05.066558       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:43:05.066632       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:43:05.067006       1 server.go:643] Version: v1.21.3
	I0817 02:43:05.067885       1 config.go:315] Starting service config controller
	I0817 02:43:05.068016       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:43:05.068105       1 config.go:224] Starting endpoint slice config controller
	I0817 02:43:05.068187       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:43:05.075717       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:43:05.079542       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:43:05.169159       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 02:43:05.169216       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08] <==
	* E0817 02:42:46.829872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:42:46.829928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0817 02:42:46.830223       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 02:42:46.830599       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:42:46.830655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:42:46.830711       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:42:46.830755       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:42:46.833525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.833686       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:42:46.833809       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.834107       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 02:42:46.840341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.843238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:42:47.692486       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.720037       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:42:47.720290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 02:42:47.763006       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.788725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.934043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:42:47.972589       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:42:47.976875       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:42:47.998841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:42:48.041214       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:42:48.197544       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 02:42:49.931904       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:41:51 UTC, end at Tue 2021-08-17 02:44:20 UTC. --
	Aug 17 02:43:04 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:04.379642    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe-xtables-lock\") pod \"kube-proxy-h6fvl\" (UID: \"e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe\") "
	Aug 17 02:43:04 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:04.379734    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdls7\" (UniqueName: \"kubernetes.io/projected/e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe-kube-api-access-rdls7\") pod \"kube-proxy-h6fvl\" (UID: \"e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe\") "
	Aug 17 02:43:04 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:04.379824    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe-kube-proxy\") pod \"kube-proxy-h6fvl\" (UID: \"e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe\") "
	Aug 17 02:43:08 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:08.432586    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:13 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:13.433632    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:18 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:18.434613    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:23 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:23.436047    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:28 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:28.437251    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:33 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:33.438786    1188 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Aug 17 02:43:53 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:53.010477    1188 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 02:43:53 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:53.042619    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpt2g\" (UniqueName: \"kubernetes.io/projected/5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc-kube-api-access-tpt2g\") pod \"coredns-558bd4d5db-bzchw\" (UID: \"5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc\") "
	Aug 17 02:43:53 pause-20210817024148-1554185 kubelet[1188]: I0817 02:43:53.042805    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc-config-volume\") pod \"coredns-558bd4d5db-bzchw\" (UID: \"5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc\") "
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: W0817 02:43:59.062156    1188 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: W0817 02:43:59.062420    1188 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: W0817 02:43:59.162663    1188 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:59.464667    1188 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:59.465014    1188 kuberuntime_sandbox.go:223] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 17 02:43:59 pause-20210817024148-1554185 kubelet[1188]: E0817 02:43:59.465038    1188 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	Aug 17 02:44:11 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:11.555946    1188 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 02:44:11 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:11.681239    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvv4f\" (UniqueName: \"kubernetes.io/projected/562918b9-84e2-4f7e-9a0a-70742893e39d-kube-api-access-vvv4f\") pod \"storage-provisioner\" (UID: \"562918b9-84e2-4f7e-9a0a-70742893e39d\") "
	Aug 17 02:44:11 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:11.681340    1188 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/562918b9-84e2-4f7e-9a0a-70742893e39d-tmp\") pod \"storage-provisioner\" (UID: \"562918b9-84e2-4f7e-9a0a-70742893e39d\") "
	Aug 17 02:44:13 pause-20210817024148-1554185 kubelet[1188]: I0817 02:44:13.148324    1188 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 17 02:44:13 pause-20210817024148-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 02:44:13 pause-20210817024148-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 02:44:13 pause-20210817024148-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42] <==
	* I0817 02:44:12.092161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 02:44:12.117851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 02:44:12.117933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 02:44:12.144567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 02:44:12.144687       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7dcc4aca-a2da-4802-9687-f8a1d81928d3", APIVersion:"v1", ResourceVersion:"515", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10 became leader
	I0817 02:44:12.145088       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10!
	I0817 02:44:12.245861       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185

                                                
                                                
=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185: exit status 2 (359.767628ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210817024148-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/VerifyStatus]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210817024148-1554185 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210817024148-1554185 describe pod : exit status 1 (105.30805ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210817024148-1554185 describe pod : exit status 1
--- FAIL: TestPause/serial/VerifyStatus (2.19s)

                                                
                                    
TestPause/serial/PauseAgain (12.09s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-20210817024148-1554185 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-20210817024148-1554185 --alsologtostderr -v=5: exit status 80 (8.62351991s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210817024148-1554185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 02:44:21.656715 1665390 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:44:21.656849 1665390 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:44:21.656856 1665390 out.go:311] Setting ErrFile to fd 2...
	I0817 02:44:21.656860 1665390 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:44:21.656999 1665390 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:44:21.657166 1665390 out.go:305] Setting JSON to false
	I0817 02:44:21.657190 1665390 mustload.go:65] Loading cluster: pause-20210817024148-1554185
	I0817 02:44:21.657506 1665390 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:44:21.657978 1665390 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:21.705474 1665390 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:21.706217 1665390 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210817024148-1554185 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0817 02:44:21.708863 1665390 out.go:177] * Pausing node pause-20210817024148-1554185 ... 
	I0817 02:44:21.708881 1665390 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:21.709213 1665390 ssh_runner.go:149] Run: systemctl --version
	I0817 02:44:21.709268 1665390 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:21.745804 1665390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:21.846130 1665390 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:21.854400 1665390 pause.go:50] kubelet running: true
	I0817 02:44:21.854446 1665390 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 02:44:30.074030 1665390 ssh_runner.go:189] Completed: sudo systemctl disable --now kubelet: (8.219561035s)
	I0817 02:44:30.074068 1665390 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 02:44:30.074131 1665390 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 02:44:30.142907 1665390 cri.go:76] found id: "cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42"
	I0817 02:44:30.142928 1665390 cri.go:76] found id: "6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a"
	I0817 02:44:30.142935 1665390 cri.go:76] found id: "335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0"
	I0817 02:44:30.142942 1665390 cri.go:76] found id: "aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66"
	I0817 02:44:30.142946 1665390 cri.go:76] found id: "ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:30.142955 1665390 cri.go:76] found id: "f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367"
	I0817 02:44:30.142959 1665390 cri.go:76] found id: "fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583"
	I0817 02:44:30.142963 1665390 cri.go:76] found id: "63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08"
	I0817 02:44:30.142976 1665390 cri.go:76] found id: ""
	I0817 02:44:30.143019 1665390 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:44:30.178989 1665390 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","pid":1575,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0/rootfs","created":"2021-08-17T02:43:04.897293883Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","pid":919,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60/rootfs","created":"2021-08-17T02:42:39.314001317Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210817024148-1554185_ed0234c9cb81abc8dc5bbcdfaf787883"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d","pid":2456,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d/rootfs","created":"2021-08-17T02:44:11.973325641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea70277
5058ebdb266d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_562918b9-84e2-4f7e-9a0a-70742893e39d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","pid":1063,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08/rootfs","created":"2021-08-17T02:42:39.515581049Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f9
6ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a/rootfs","created":"2021-08-17T02:43:53.705600139Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb/rootfs","created":"2021-08-17T02:42:39.34752748Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26
e98b77ee1b2764d9241bbbb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210817024148-1554185_0718a393987d1be3d2ae606f942a3f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa/rootfs","created":"2021-08-17T02:43:04.691568232Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-h6fvl_e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926e
c05e9","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9/rootfs","created":"2021-08-17T02:42:39.402749597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210817024148-1554185_a97ec150b358b2334cd33dc4c454d661"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a9
21033b3042e86497dd8533ae66/rootfs","created":"2021-08-17T02:43:04.899987912Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","pid":1861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec/rootfs","created":"2021-08-17T02:43:53.53831483Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-bzchw_5c5ce93f-6aa2-4786-b8c6-a4a48b4d9f
fc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9/rootfs","created":"2021-08-17T02:42:39.343864386Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210817024148-1554185_b3d802d137ed3edc177474465272f732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42","pid":2488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402
115529d42","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42/rootfs","created":"2021-08-17T02:44:12.067979642Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c/rootfs","created":"2021-08-17T02:43:04.691547802Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","io.ku
bernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9lnwm_8b4c4c45-1613-49c8-9b03-e13120205af4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","pid":1140,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/rootfs","created":"2021-08-17T02:42:39.62877376Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","pid":1098,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c92
61c63e9d4a7dc6fe54ab966a367","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367/rootfs","created":"2021-08-17T02:42:39.549869273Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","pid":1061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583/rootfs","created":"2021-08-17T02:42:39.510915707Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3daddb
ac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60"},"owner":"root"}]
	I0817 02:44:30.179197 1665390 cri.go:113] list returned 16 containers
	I0817 02:44:30.179210 1665390 cri.go:116] container: {ID:335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 Status:running}
	I0817 02:44:30.179221 1665390 cri.go:116] container: {ID:3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 Status:running}
	I0817 02:44:30.179230 1665390 cri.go:118] skipping 3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 - not in ps
	I0817 02:44:30.179235 1665390 cri.go:116] container: {ID:62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d Status:running}
	I0817 02:44:30.179242 1665390 cri.go:118] skipping 62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d - not in ps
	I0817 02:44:30.179247 1665390 cri.go:116] container: {ID:63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 Status:running}
	I0817 02:44:30.179258 1665390 cri.go:116] container: {ID:6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a Status:running}
	I0817 02:44:30.179263 1665390 cri.go:116] container: {ID:73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb Status:running}
	I0817 02:44:30.179271 1665390 cri.go:118] skipping 73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb - not in ps
	I0817 02:44:30.179275 1665390 cri.go:116] container: {ID:771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa Status:running}
	I0817 02:44:30.179285 1665390 cri.go:118] skipping 771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa - not in ps
	I0817 02:44:30.179289 1665390 cri.go:116] container: {ID:7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 Status:running}
	I0817 02:44:30.179298 1665390 cri.go:118] skipping 7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 - not in ps
	I0817 02:44:30.179303 1665390 cri.go:116] container: {ID:aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 Status:running}
	I0817 02:44:30.179308 1665390 cri.go:116] container: {ID:b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec Status:running}
	I0817 02:44:30.179313 1665390 cri.go:118] skipping b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec - not in ps
	I0817 02:44:30.179318 1665390 cri.go:116] container: {ID:bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 Status:running}
	I0817 02:44:30.179326 1665390 cri.go:118] skipping bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 - not in ps
	I0817 02:44:30.179330 1665390 cri.go:116] container: {ID:cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42 Status:running}
	I0817 02:44:30.179342 1665390 cri.go:116] container: {ID:ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c Status:running}
	I0817 02:44:30.179348 1665390 cri.go:118] skipping ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c - not in ps
	I0817 02:44:30.179353 1665390 cri.go:116] container: {ID:ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 Status:running}
	I0817 02:44:30.179362 1665390 cri.go:116] container: {ID:f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 Status:running}
	I0817 02:44:30.179367 1665390 cri.go:116] container: {ID:fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 Status:running}
	I0817 02:44:30.179410 1665390 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0
	I0817 02:44:30.192808 1665390 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08
	I0817 02:44:30.206893 1665390 out.go:177] 
	W0817 02:44:30.207023 1665390 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:44:30Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:44:30Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0817 02:44:30.207036 1665390 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0817 02:44:30.214427 1665390 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_2.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_2.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0817 02:44:30.216109 1665390 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-20210817024148-1554185 --alsologtostderr -v=5" : exit status 80
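The stderr above shows the immediate cause of the exit 80: minikube passed two container IDs to a single `runc pause` invocation, while the runc usage text it printed states that `pause` takes exactly one container ID per call. A minimal sketch of the per-container form implied by that usage text, reusing the IDs and --root path from the log above (an illustration of the runc invocation only, not minikube's own fix):

	# pause each listed container in its own runc call, since
	# `runc pause` requires exactly one container ID per invocation
	sudo runc --root /run/containerd/runc/k8s.io pause 335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0
	sudo runc --root /run/containerd/runc/k8s.io pause 63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08

This matches the first runc command in the trace (02:44:30.179410), which paused a single container and succeeded; the failure occurred only on the batched two-ID call that followed.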
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210817024148-1554185
helpers_test.go:236: (dbg) docker inspect pause-20210817024148-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b",
	        "Created": "2021-08-17T02:41:50.320902147Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1657229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:41:51.004888651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/hosts",
	        "LogPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b-json.log",
	        "Name": "/pause-20210817024148-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20210817024148-1554185:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210817024148-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210817024148-1554185",
	                "Source": "/var/lib/docker/volumes/pause-20210817024148-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210817024148-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210817024148-1554185",
	                "name.minikube.sigs.k8s.io": "pause-20210817024148-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8a8e6cd79da22a5765a51578b1ea6e8efa8e27c6c5dbb571e80d79023db3847",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50410"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50406"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50408"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50407"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d8a8e6cd79da",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210817024148-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b9b1ad2a3171",
	                        "pause-20210817024148-1554185"
	                    ],
	                    "NetworkID": "747733296a426a6f52daff293191c7fb9ea960ba5380b91809f97050286a1932",
	                    "EndpointID": "54b17c5460167eb93db2a6807c51835973c485175b87062600e587d432698b14",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185: exit status 2 (359.415565ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p pause-20210817024148-1554185 logs -n 25
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                             Args                              |                   Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210817022620-1554185 cp testdata/cp-test.txt      | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:29:49 UTC |
	|         | multinode-20210817022620-1554185-m03:/home/docker/cp-test.txt |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:29:49 UTC |
	|         | ssh -n                                                        |                                             |         |         |                               |                               |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:30:09 UTC |
	|         | node stop m03                                                 |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:30:10 UTC | Tue, 17 Aug 2021 02:30:40 UTC |
	|         | node start m03 --alsologtostderr                              |                                             |         |         |                               |                               |
	| stop    | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:30:41 UTC | Tue, 17 Aug 2021 02:31:41 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:31:41 UTC | Tue, 17 Aug 2021 02:34:02 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	|         | --wait=true -v=8                                              |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:34:02 UTC | Tue, 17 Aug 2021 02:34:26 UTC |
	|         | node delete m03                                               |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185                              | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:34:27 UTC | Tue, 17 Aug 2021 02:35:07 UTC |
	|         | stop                                                          |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:35:07 UTC | Tue, 17 Aug 2021 02:36:46 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	|         | --wait=true -v=8                                              |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	|         | --driver=docker                                               |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| start   | -p                                                            | multinode-20210817022620-1554185-m03        | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:36:47 UTC | Tue, 17 Aug 2021 02:37:57 UTC |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	|         | --driver=docker                                               |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| delete  | -p                                                            | multinode-20210817022620-1554185-m03        | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:37:57 UTC | Tue, 17 Aug 2021 02:38:00 UTC |
	|         | multinode-20210817022620-1554185-m03                          |                                             |         |         |                               |                               |
	| delete  | -p                                                            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:38:00 UTC | Tue, 17 Aug 2021 02:38:04 UTC |
	|         | multinode-20210817022620-1554185                              |                                             |         |         |                               |                               |
	| start   | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:39:35 UTC | Tue, 17 Aug 2021 02:40:42 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --memory=2048 --driver=docker                                 |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| stop    | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:40:42 UTC | Tue, 17 Aug 2021 02:40:42 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --cancel-scheduled                                            |                                             |         |         |                               |                               |
	| stop    | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:40:55 UTC | Tue, 17 Aug 2021 02:41:20 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	|         | --schedule 5s                                                 |                                             |         |         |                               |                               |
	| delete  | -p                                                            | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:20 UTC | Tue, 17 Aug 2021 02:41:25 UTC |
	|         | scheduled-stop-20210817023935-1554185                         |                                             |         |         |                               |                               |
	| delete  | -p                                                            | insufficient-storage-20210817024125-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:42 UTC | Tue, 17 Aug 2021 02:41:48 UTC |
	|         | insufficient-storage-20210817024125-1554185                   |                                             |         |         |                               |                               |
	| delete  | -p                                                            | missing-upgrade-20210817024148-1554185      | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:02 UTC | Tue, 17 Aug 2021 02:43:07 UTC |
	|         | missing-upgrade-20210817024148-1554185                        |                                             |         |         |                               |                               |
	| start   | -p                                                            | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:48 UTC | Tue, 17 Aug 2021 02:43:55 UTC |
	|         | pause-20210817024148-1554185                                  |                                             |         |         |                               |                               |
	|         | --memory=2048                                                 |                                             |         |         |                               |                               |
	|         | --install-addons=false                                        |                                             |         |         |                               |                               |
	|         | --wait=all --driver=docker                                    |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| start   | -p                                                            | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:55 UTC | Tue, 17 Aug 2021 02:44:12 UTC |
	|         | pause-20210817024148-1554185                                  |                                             |         |         |                               |                               |
	|         | --alsologtostderr                                             |                                             |         |         |                               |                               |
	|         | -v=1 --driver=docker                                          |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| -p      | pause-20210817024148-1554185                                  | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:15 UTC | Tue, 17 Aug 2021 02:44:15 UTC |
	|         | logs -n 25                                                    |                                             |         |         |                               |                               |
	| -p      | pause-20210817024148-1554185                                  | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:16 UTC | Tue, 17 Aug 2021 02:44:18 UTC |
	|         | logs -n 25                                                    |                                             |         |         |                               |                               |
	| -p      | pause-20210817024148-1554185                                  | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:19 UTC | Tue, 17 Aug 2021 02:44:20 UTC |
	|         | logs -n 25                                                    |                                             |         |         |                               |                               |
	| start   | -p                                                            | kubernetes-upgrade-20210817024307-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:07 UTC | Tue, 17 Aug 2021 02:44:20 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185                     |                                             |         |         |                               |                               |
	|         | --memory=2200                                                 |                                             |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                                  |                                             |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker                        |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd                                |                                             |         |         |                               |                               |
	| unpause | -p                                                            | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:21 UTC | Tue, 17 Aug 2021 02:44:21 UTC |
	|         | pause-20210817024148-1554185                                  |                                             |         |         |                               |                               |
	|         | --alsologtostderr -v=5                                        |                                             |         |         |                               |                               |
	|---------|---------------------------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 02:43:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 02:43:55.935620 1662846 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:43:55.935723 1662846 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:43:55.935738 1662846 out.go:311] Setting ErrFile to fd 2...
	I0817 02:43:55.935766 1662846 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:43:55.935946 1662846 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:43:55.936251 1662846 out.go:305] Setting JSON to false
	I0817 02:43:55.937622 1662846 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37574,"bootTime":1629130662,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:43:55.937719 1662846 start.go:121] virtualization:  
	I0817 02:43:55.939817 1662846 out.go:177] * [pause-20210817024148-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:43:55.941669 1662846 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:43:55.940702 1662846 notify.go:169] Checking for updates...
	I0817 02:43:55.943679 1662846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:43:55.945437 1662846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:43:55.946802 1662846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:43:55.947210 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:43:55.947633 1662846 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:43:56.027823 1662846 docker.go:132] docker version: linux-20.10.8
	I0817 02:43:56.027923 1662846 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:43:56.176370 1662846 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-17 02:43:56.090848407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:43:56.176502 1662846 docker.go:244] overlay module found
	I0817 02:43:56.179749 1662846 out.go:177] * Using the docker driver based on existing profile
	I0817 02:43:56.179775 1662846 start.go:278] selected driver: docker
	I0817 02:43:56.179782 1662846 start.go:751] validating driver "docker" against &{Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:43:56.179866 1662846 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 02:43:56.179980 1662846 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:43:56.292126 1662846 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-17 02:43:56.216837922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:43:56.292468 1662846 cni.go:93] Creating CNI manager for ""
	I0817 02:43:56.292486 1662846 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:43:56.292501 1662846 start_flags.go:277] config:
	{Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:43:56.294520 1662846 out.go:177] * Starting control plane node pause-20210817024148-1554185 in cluster pause-20210817024148-1554185
	I0817 02:43:56.294554 1662846 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 02:43:56.296008 1662846 out.go:177] * Pulling base image ...
	I0817 02:43:56.296031 1662846 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:43:56.296059 1662846 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 02:43:56.296077 1662846 cache.go:56] Caching tarball of preloaded images
	I0817 02:43:56.296206 1662846 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 02:43:56.296231 1662846 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 02:43:56.296337 1662846 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/config.json ...
	I0817 02:43:56.296506 1662846 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 02:43:56.358839 1662846 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 02:43:56.358863 1662846 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 02:43:56.358876 1662846 cache.go:205] Successfully downloaded all kic artifacts
	I0817 02:43:56.358910 1662846 start.go:313] acquiring machines lock for pause-20210817024148-1554185: {Name:mk43ad0c6625870b459afd5900940b78473b954e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 02:43:56.358994 1662846 start.go:317] acquired machines lock for "pause-20210817024148-1554185" in 57.583µs
	I0817 02:43:56.359016 1662846 start.go:93] Skipping create...Using existing machine configuration
	I0817 02:43:56.359025 1662846 fix.go:55] fixHost starting: 
	I0817 02:43:56.359303 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:43:56.396845 1662846 fix.go:108] recreateIfNeeded on pause-20210817024148-1554185: state=Running err=<nil>
	W0817 02:43:56.396879 1662846 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 02:43:56.399120 1662846 out.go:177] * Updating the running docker "pause-20210817024148-1554185" container ...
	I0817 02:43:56.399143 1662846 machine.go:88] provisioning docker machine ...
	I0817 02:43:56.399156 1662846 ubuntu.go:169] provisioning hostname "pause-20210817024148-1554185"
	I0817 02:43:56.399223 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:56.460270 1662846 main.go:130] libmachine: Using SSH client type: native
	I0817 02:43:56.460437 1662846 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50410 <nil> <nil>}
	I0817 02:43:56.460450 1662846 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210817024148-1554185 && echo "pause-20210817024148-1554185" | sudo tee /etc/hostname
	I0817 02:43:56.592739 1662846 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210817024148-1554185
	
	I0817 02:43:56.592882 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:56.636319 1662846 main.go:130] libmachine: Using SSH client type: native
	I0817 02:43:56.636499 1662846 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50410 <nil> <nil>}
	I0817 02:43:56.636520 1662846 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210817024148-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210817024148-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210817024148-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 02:43:56.775961 1662846 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 02:43:56.775984 1662846 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 02:43:56.776018 1662846 ubuntu.go:177] setting up certificates
	I0817 02:43:56.776029 1662846 provision.go:83] configureAuth start
	I0817 02:43:56.776079 1662846 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817024148-1554185
	I0817 02:43:56.827590 1662846 provision.go:138] copyHostCerts
	I0817 02:43:56.827646 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 02:43:56.827654 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 02:43:56.827713 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 02:43:56.827792 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 02:43:56.827799 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 02:43:56.827820 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 02:43:56.827872 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 02:43:56.827880 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 02:43:56.827900 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 02:43:56.827946 1662846 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.pause-20210817024148-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210817024148-1554185]
	I0817 02:43:57.691741 1662846 provision.go:172] copyRemoteCerts
	I0817 02:43:57.691838 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 02:43:57.691973 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:57.733998 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:57.822192 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 02:43:57.856857 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 02:43:57.883796 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 02:43:57.906578 1662846 provision.go:86] duration metric: configureAuth took 1.130540743s
	I0817 02:43:57.906595 1662846 ubuntu.go:193] setting minikube options for container-runtime
	I0817 02:43:57.906755 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:43:57.906762 1662846 machine.go:91] provisioned docker machine in 1.507614043s
	I0817 02:43:57.906767 1662846 start.go:267] post-start starting for "pause-20210817024148-1554185" (driver="docker")
	I0817 02:43:57.906773 1662846 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 02:43:57.906827 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 02:43:57.906865 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:57.948809 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.037379 1662846 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 02:43:58.040800 1662846 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 02:43:58.040823 1662846 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 02:43:58.040834 1662846 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 02:43:58.040841 1662846 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 02:43:58.040851 1662846 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 02:43:58.040896 1662846 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 02:43:58.040978 1662846 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 02:43:58.041076 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 02:43:58.047645 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:43:58.063186 1662846 start.go:270] post-start completed in 156.406695ms
	I0817 02:43:58.063236 1662846 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:43:58.063283 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.095312 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.179965 1662846 fix.go:57] fixHost completed within 1.82093535s
	I0817 02:43:58.179990 1662846 start.go:80] releasing machines lock for "pause-20210817024148-1554185", held for 1.820983908s
	I0817 02:43:58.180071 1662846 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817024148-1554185
	I0817 02:43:58.213692 1662846 ssh_runner.go:149] Run: systemctl --version
	I0817 02:43:58.213738 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.213787 1662846 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 02:43:58.213879 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.294808 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.305791 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.579632 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 02:43:58.595650 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 02:43:58.606604 1662846 docker.go:153] disabling docker service ...
	I0817 02:43:58.606667 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 02:43:58.617075 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 02:43:58.626385 1662846 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 02:43:58.766845 1662846 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 02:43:58.893614 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 02:43:58.903792 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 02:43:58.915967 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0817 02:43:58.928706 1662846 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 02:43:58.935023 1662846 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 02:43:58.941385 1662846 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 02:43:59.052351 1662846 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 02:43:59.207573 1662846 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 02:43:59.207636 1662846 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 02:43:59.211818 1662846 start.go:413] Will wait 60s for crictl version
	I0817 02:43:59.211935 1662846 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:43:59.253079 1662846 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T02:43:59Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 02:44:02.120599 1660780 out.go:204]   - Configuring RBAC rules ...
	I0817 02:44:02.550006 1660780 cni.go:93] Creating CNI manager for ""
	I0817 02:44:02.550033 1660780 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:44:02.551994 1660780 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:44:02.552050 1660780 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:44:02.555689 1660780 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0817 02:44:02.555706 1660780 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:44:02.567554 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:44:03.257879 1660780 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:44:03.258050 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=kubernetes-upgrade-20210817024307-1554185 minikube.k8s.io/updated_at=2021_08_17T02_44_03_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:44:03.258163 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:44:03.421126 1660780 kubeadm.go:985] duration metric: took 163.181168ms to wait for elevateKubeSystemPrivileges.
	I0817 02:44:03.421159 1660780 ops.go:34] apiserver oom_adj: 16
	I0817 02:44:03.421165 1660780 ops.go:39] adjusting apiserver oom_adj to -10
	I0817 02:44:03.421175 1660780 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:44:03.435297 1660780 kubeadm.go:392] StartCluster complete in 20.942151429s
	I0817 02:44:03.435324 1660780 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:03.435396 1660780 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:44:03.436729 1660780 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:03.437591 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:03.960675 1660780 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20210817024307-1554185" rescaled to 1
	I0817 02:44:03.960735 1660780 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0817 02:44:03.962792 1660780 out.go:177] * Verifying Kubernetes components...
	I0817 02:44:03.962884 1660780 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:03.960776 1660780 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:44:03.961119 1660780 config.go:177] Loaded profile config "kubernetes-upgrade-20210817024307-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0817 02:44:03.961133 1660780 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 02:44:03.963065 1660780 addons.go:59] Setting storage-provisioner=true in profile "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963086 1660780 addons.go:135] Setting addon storage-provisioner=true in "kubernetes-upgrade-20210817024307-1554185"
	W0817 02:44:03.963092 1660780 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:44:03.963117 1660780 host.go:66] Checking if "kubernetes-upgrade-20210817024307-1554185" exists ...
	I0817 02:44:03.963609 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:03.963742 1660780 addons.go:59] Setting default-storageclass=true in profile "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963759 1660780 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963983 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:04.042068 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:04.045974 1660780 addons.go:135] Setting addon default-storageclass=true in "kubernetes-upgrade-20210817024307-1554185"
	W0817 02:44:04.045994 1660780 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:44:04.046020 1660780 host.go:66] Checking if "kubernetes-upgrade-20210817024307-1554185" exists ...
	I0817 02:44:04.046775 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:04.067540 1660780 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:44:04.067635 1660780 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:04.067644 1660780 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:44:04.067698 1660780 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817024307-1554185
	I0817 02:44:04.134515 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:04.135773 1660780 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:44:04.135809 1660780 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:04.135958 1660780 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 02:44:04.162945 1660780 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:04.162964 1660780 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:44:04.163014 1660780 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817024307-1554185
	I0817 02:44:04.169842 1660780 api_server.go:70] duration metric: took 209.062582ms to wait for apiserver process to appear ...
	I0817 02:44:04.169860 1660780 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:44:04.169869 1660780 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:44:04.202845 1660780 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0817 02:44:04.203126 1660780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50433 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210817024307-1554185/id_rsa Username:docker}
	I0817 02:44:04.205563 1660780 api_server.go:139] control plane version: v1.14.0
	I0817 02:44:04.205585 1660780 api_server.go:129] duration metric: took 35.719651ms to wait for apiserver health ...
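The healthz wait logged above amounts to polling the API server's /healthz endpoint until it answers 200 with body "ok". A minimal sketch of that probe follows; it is not minikube's actual code, the host URL is the one from this log, and TLS verification is skipped purely to keep the sketch short (the real client uses the client.crt/client.key/ca.crt paths dumped in the kapi.go lines above).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls <host>/healthz until it returns 200 "ok" or the
// overall timeout expires.
func probeHealthz(host string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(host + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver not healthy within %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := probeHealthz("https://192.168.58.2:8443", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}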
	I0817 02:44:04.205593 1660780 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:44:04.225295 1660780 system_pods.go:59] 0 kube-system pods found
	I0817 02:44:04.225327 1660780 retry.go:31] will retry after 305.063636ms: only 0 pod(s) have shown up
	I0817 02:44:04.236904 1660780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50433 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210817024307-1554185/id_rsa Username:docker}
	I0817 02:44:04.357326 1660780 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:04.437896 1660780 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:04.532218 1660780 system_pods.go:59] 0 kube-system pods found
	I0817 02:44:04.532272 1660780 retry.go:31] will retry after 338.212508ms: only 0 pod(s) have shown up
	I0817 02:44:04.546195 1660780 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
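The "host record injected into CoreDNS" message corresponds to the sed pipeline run at 02:44:04.135958: it rewrites the coredns ConfigMap so that the Corefile gains a hosts block immediately before the existing "forward . /etc/resolv.conf" line. The injected stanza, taken directly from that sed script, is:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }

This makes host.minikube.internal resolvable from inside the cluster while all other names still fall through to the node's resolver.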
	I0817 02:44:04.758200 1660780 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:44:04.758230 1660780 addons.go:344] enableAddons completed in 797.092673ms
	I0817 02:44:04.873249 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:04.873282 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:04.873317 1660780 retry.go:31] will retry after 378.459802ms: only 1 pod(s) have shown up
	I0817 02:44:05.254768 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:05.254794 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:05.254805 1660780 retry.go:31] will retry after 469.882201ms: only 1 pod(s) have shown up
	I0817 02:44:05.727524 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:05.727553 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:05.727565 1660780 retry.go:31] will retry after 667.365439ms: only 1 pod(s) have shown up
	I0817 02:44:06.397213 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:06.397242 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:06.397268 1660780 retry.go:31] will retry after 597.243124ms: only 1 pod(s) have shown up
	I0817 02:44:06.996568 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:06.996597 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:06.996622 1660780 retry.go:31] will retry after 789.889932ms: only 1 pod(s) have shown up
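The repeated "retry.go:31] will retry after ..." lines follow a retry-with-growing-delay pattern. A rough sketch of that pattern is below; the backoff factor and helper names are assumptions for illustration, not minikube's retry implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil calls fn, and on failure sleeps for a growing delay before
// trying again, until the overall timeout is exceeded.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		time.Sleep(delay)
		delay = delay * 3 / 2 // the "will retry after ..." interval grows between attempts
	}
}

func main() {
	attempts := 0
	err := retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("only 1 pod(s) have shown up")
		}
		return nil
	})
	fmt.Println(attempts, err)
}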
	I0817 02:44:10.303855 1662846 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:44:10.329831 1662846 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 02:44:10.329883 1662846 ssh_runner.go:149] Run: containerd --version
	I0817 02:44:10.350547 1662846 ssh_runner.go:149] Run: containerd --version
	I0817 02:44:10.372464 1662846 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 02:44:10.372542 1662846 cli_runner.go:115] Run: docker network inspect pause-20210817024148-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 02:44:10.403413 1662846 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 02:44:10.406786 1662846 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:44:10.406887 1662846 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:44:10.432961 1662846 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:44:10.432979 1662846 containerd.go:517] Images already preloaded, skipping extraction
	I0817 02:44:10.433017 1662846 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:44:10.456152 1662846 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:44:10.456171 1662846 cache_images.go:74] Images are preloaded, skipping loading
	I0817 02:44:10.456212 1662846 ssh_runner.go:149] Run: sudo crictl info
	I0817 02:44:10.478011 1662846 cni.go:93] Creating CNI manager for ""
	I0817 02:44:10.478033 1662846 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:44:10.478056 1662846 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 02:44:10.478081 1662846 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210817024148-1554185 NodeName:pause-20210817024148-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:
/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 02:44:10.478244 1662846 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20210817024148-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 02:44:10.478333 1662846 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210817024148-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 02:44:10.478387 1662846 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 02:44:10.484940 1662846 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 02:44:10.484985 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 02:44:10.490924 1662846 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (573 bytes)
	I0817 02:44:10.502660 1662846 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 02:44:10.513948 1662846 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2078 bytes)
	I0817 02:44:10.524832 1662846 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 02:44:10.527527 1662846 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185 for IP: 192.168.49.2
	I0817 02:44:10.527570 1662846 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 02:44:10.527589 1662846 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 02:44:10.527638 1662846 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key
	I0817 02:44:10.527664 1662846 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.key.dd3b5fb2
	I0817 02:44:10.527684 1662846 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.key
	I0817 02:44:10.527782 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 02:44:10.527819 1662846 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 02:44:10.527834 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 02:44:10.527857 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 02:44:10.527884 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 02:44:10.527924 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 02:44:10.527974 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:44:10.529038 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 02:44:10.544784 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 02:44:10.559660 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 02:44:10.574785 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 02:44:10.590201 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 02:44:10.605037 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 02:44:10.623387 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 02:44:10.639037 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 02:44:10.654135 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 02:44:10.669090 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 02:44:10.684622 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 02:44:10.699670 1662846 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 02:44:10.710765 1662846 ssh_runner.go:149] Run: openssl version
	I0817 02:44:10.717019 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 02:44:10.724137 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.726900 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.726944 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.731588 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 02:44:10.737384 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 02:44:10.743565 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.746327 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.746375 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.750493 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 02:44:10.756191 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 02:44:10.762612 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.765540 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.765579 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.769923 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
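The three openssl/ln sequences above install each CA certificate the way OpenSSL expects: compute the certificate's subject hash and symlink /etc/ssl/certs/<hash>.0 to it. A small sketch of the same steps, assuming an illustrative certificate path; this is not minikube's code, just a replay of the commands shown in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA hashes the certificate with openssl and points
// /etc/ssl/certs/<hash>.0 at it so OpenSSL-style lookups find it.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}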
	I0817 02:44:10.775678 1662846 kubeadm.go:390] StartCluster: {Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:44:10.775762 1662846 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 02:44:10.775824 1662846 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:44:10.802431 1662846 cri.go:76] found id: "6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a"
	I0817 02:44:10.802447 1662846 cri.go:76] found id: "335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0"
	I0817 02:44:10.802453 1662846 cri.go:76] found id: "aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66"
	I0817 02:44:10.802457 1662846 cri.go:76] found id: "ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:10.802462 1662846 cri.go:76] found id: "f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367"
	I0817 02:44:10.802470 1662846 cri.go:76] found id: "fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583"
	I0817 02:44:10.802478 1662846 cri.go:76] found id: "63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08"
	I0817 02:44:10.802488 1662846 cri.go:76] found id: ""
	I0817 02:44:10.802522 1662846 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:44:10.836121 1662846 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","pid":1575,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0/rootfs","created":"2021-08-17T02:43:04.897293883Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","pid":919,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60/rootfs","created":"2021-08-17T02:42:39.314001317Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210817024148-1554185_ed0234c9cb81abc8dc5bbcdfaf787883"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","pid":1063,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08/rootfs","created":"2021-08-17T02:42:39.515581049Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id"
:"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a/rootfs","created":"2021-08-17T02:43:53.705600139Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","rootfs":"/run/containerd
/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb/rootfs","created":"2021-08-17T02:42:39.34752748Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210817024148-1554185_0718a393987d1be3d2ae606f942a3f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa/rootfs","created":"2021-08-17T02:43:04.691568232Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"771e9a30f4bd
ab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-h6fvl_e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9/rootfs","created":"2021-08-17T02:42:39.402749597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210817024148-1554185_a97ec150b358b2334cd33dc4c454d661"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aad2134f4047a0355037c106be1e03aab147a921
033b3042e86497dd8533ae66","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66/rootfs","created":"2021-08-17T02:43:04.899987912Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","pid":1861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec/rootfs","created":"2021-08-17T02:43:53.53831483Z",
"annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-bzchw_5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9/rootfs","created":"2021-08-17T02:42:39.343864386Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210817024148-1554185_b3d802d137e
d3edc177474465272f732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c/rootfs","created":"2021-08-17T02:43:04.691547802Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9lnwm_8b4c4c45-1613-49c8-9b03-e13120205af4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","pid":1140,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c466
5f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/rootfs","created":"2021-08-17T02:42:39.62877376Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","pid":1098,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367/rootfs","created":"2021-08-17T02:42:39.549869273Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607d
f03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","pid":1061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583/rootfs","created":"2021-08-17T02:42:39.510915707Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60"},"owner":"root"}]
	I0817 02:44:10.836356 1662846 cri.go:113] list returned 14 containers
	I0817 02:44:10.836368 1662846 cri.go:116] container: {ID:335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 Status:running}
	I0817 02:44:10.836387 1662846 cri.go:122] skipping {335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 running}: state = "running", want "paused"
	I0817 02:44:10.836402 1662846 cri.go:116] container: {ID:3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 Status:running}
	I0817 02:44:10.836408 1662846 cri.go:118] skipping 3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 - not in ps
	I0817 02:44:10.836413 1662846 cri.go:116] container: {ID:63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 Status:running}
	I0817 02:44:10.836423 1662846 cri.go:122] skipping {63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 running}: state = "running", want "paused"
	I0817 02:44:10.836429 1662846 cri.go:116] container: {ID:6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a Status:running}
	I0817 02:44:10.836439 1662846 cri.go:122] skipping {6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a running}: state = "running", want "paused"
	I0817 02:44:10.836444 1662846 cri.go:116] container: {ID:73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb Status:running}
	I0817 02:44:10.836454 1662846 cri.go:118] skipping 73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb - not in ps
	I0817 02:44:10.836458 1662846 cri.go:116] container: {ID:771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa Status:running}
	I0817 02:44:10.836463 1662846 cri.go:118] skipping 771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa - not in ps
	I0817 02:44:10.836467 1662846 cri.go:116] container: {ID:7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 Status:running}
	I0817 02:44:10.836477 1662846 cri.go:118] skipping 7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 - not in ps
	I0817 02:44:10.836481 1662846 cri.go:116] container: {ID:aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 Status:running}
	I0817 02:44:10.836493 1662846 cri.go:122] skipping {aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 running}: state = "running", want "paused"
	I0817 02:44:10.836499 1662846 cri.go:116] container: {ID:b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec Status:running}
	I0817 02:44:10.836512 1662846 cri.go:118] skipping b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec - not in ps
	I0817 02:44:10.836518 1662846 cri.go:116] container: {ID:bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 Status:running}
	I0817 02:44:10.836524 1662846 cri.go:118] skipping bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 - not in ps
	I0817 02:44:10.836528 1662846 cri.go:116] container: {ID:ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c Status:running}
	I0817 02:44:10.836533 1662846 cri.go:118] skipping ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c - not in ps
	I0817 02:44:10.836537 1662846 cri.go:116] container: {ID:ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 Status:running}
	I0817 02:44:10.836542 1662846 cri.go:122] skipping {ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 running}: state = "running", want "paused"
	I0817 02:44:10.836547 1662846 cri.go:116] container: {ID:f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 Status:running}
	I0817 02:44:10.836553 1662846 cri.go:122] skipping {f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 running}: state = "running", want "paused"
	I0817 02:44:10.836562 1662846 cri.go:116] container: {ID:fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 Status:running}
	I0817 02:44:10.836567 1662846 cri.go:122] skipping {fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 running}: state = "running", want "paused"
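The cri.go lines above show the filtering applied to the `runc list` output: containers that crictl did not report (the sandboxes) are skipped as "not in ps", and the rest are skipped unless their state matches the wanted state, which is "paused" for this listing. A minimal sketch of that filter; the type and function names are illustrative, not minikube's actual API.

package main

import "fmt"

// container mirrors the {ID Status} pairs printed above.
type container struct {
	ID    string
	State string
}

// filterByState keeps only containers that crictl reported (inPs) and whose
// runc state matches the wanted state; everything else is skipped.
func filterByState(all []container, inPs map[string]bool, want string) []string {
	var keep []string
	for _, c := range all {
		if !inPs[c.ID] {
			continue // "skipping <id> - not in ps"
		}
		if c.State != want {
			continue // `state = "running", want "paused"`
		}
		keep = append(keep, c.ID)
	}
	return keep
}

func main() {
	all := []container{{ID: "abc", State: "running"}}
	fmt.Println(filterByState(all, map[string]bool{"abc": true}, "paused")) // prints []
}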
	I0817 02:44:10.836606 1662846 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 02:44:10.842672 1662846 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 02:44:10.842686 1662846 kubeadm.go:600] restartCluster start
	I0817 02:44:10.842722 1662846 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 02:44:10.848569 1662846 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:44:10.849505 1662846 kubeconfig.go:93] found "pause-20210817024148-1554185" server: "https://192.168.49.2:8443"
	I0817 02:44:10.850203 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-2021081
7024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:10.851914 1662846 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 02:44:10.861281 1662846 api_server.go:164] Checking apiserver status ...
	I0817 02:44:10.861344 1662846 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:10.872606 1662846 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup
	I0817 02:44:10.878948 1662846 api_server.go:180] apiserver freezer: "6:freezer:/docker/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/kubepods/burstable/poda97ec150b358b2334cd33dc4c454d661/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:10.879031 1662846 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/kubepods/burstable/poda97ec150b358b2334cd33dc4c454d661/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/freezer.state
	I0817 02:44:10.887614 1662846 api_server.go:202] freezer state: "THAWED"
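The two commands before the "freezer state" line locate the kube-apiserver's freezer cgroup from /proc/<pid>/cgroup and then read its freezer.state (THAWED means the process is not paused). A sketch of the same lookup, assuming cgroup v1 paths as on this node; names are illustrative, not minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState reads /proc/<pid>/cgroup, picks the freezer controller's
// path, and returns the contents of that cgroup's freezer.state file.
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		parts := strings.SplitN(line, ":", 3) // "<id>:freezer:<path>"
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			return strings.TrimSpace(string(state)), err
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
}

func main() {
	state, err := freezerState(1140) // pid taken from the log above
	fmt.Println(state, err)
}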
	I0817 02:44:10.887653 1662846 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:44:10.897405 1662846 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 02:44:10.921793 1662846 system_pods.go:86] 7 kube-system pods found
	I0817 02:44:10.921826 1662846 system_pods.go:89] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:10.921833 1662846 system_pods.go:89] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:10.921837 1662846 system_pods.go:89] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:10.921846 1662846 system_pods.go:89] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:10.921851 1662846 system_pods.go:89] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:10.921860 1662846 system_pods.go:89] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:10.921864 1662846 system_pods.go:89] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:10.922656 1662846 api_server.go:139] control plane version: v1.21.3
	I0817 02:44:10.922674 1662846 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.49.2
	I0817 02:44:10.922683 1662846 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0817 02:44:10.922688 1662846 kubeadm.go:604] restartCluster took 79.997602ms
	I0817 02:44:10.922692 1662846 kubeadm.go:392] StartCluster complete in 147.020078ms
	I0817 02:44:10.922711 1662846 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:10.922795 1662846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:44:10.923814 1662846 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:10.924639 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-2021081
7024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:10.927764 1662846 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210817024148-1554185" rescaled to 1
	I0817 02:44:10.927819 1662846 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 02:44:10.929557 1662846 out.go:177] * Verifying Kubernetes components...
	I0817 02:44:10.929621 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:10.928056 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:44:10.928073 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:44:10.928083 1662846 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 02:44:10.929772 1662846 addons.go:59] Setting storage-provisioner=true in profile "pause-20210817024148-1554185"
	I0817 02:44:10.929796 1662846 addons.go:135] Setting addon storage-provisioner=true in "pause-20210817024148-1554185"
	W0817 02:44:10.929827 1662846 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:44:10.929865 1662846 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:10.930344 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:10.935094 1662846 addons.go:59] Setting default-storageclass=true in profile "pause-20210817024148-1554185"
	I0817 02:44:10.935122 1662846 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210817024148-1554185"
	I0817 02:44:10.935399 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:11.011181 1662846 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:44:11.011290 1662846 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:11.011301 1662846 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:44:11.011350 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:11.015016 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-2021081
7024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:11.019001 1662846 addons.go:135] Setting addon default-storageclass=true in "pause-20210817024148-1554185"
	W0817 02:44:11.019019 1662846 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:44:11.019042 1662846 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:11.019478 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:11.072649 1662846 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:11.072687 1662846 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:44:11.072739 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:11.092036 1662846 node_ready.go:35] waiting up to 6m0s for node "pause-20210817024148-1554185" to be "Ready" ...
	I0817 02:44:11.092329 1662846 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 02:44:11.095935 1662846 node_ready.go:49] node "pause-20210817024148-1554185" has status "Ready":"True"
	I0817 02:44:11.095950 1662846 node_ready.go:38] duration metric: took 3.885427ms waiting for node "pause-20210817024148-1554185" to be "Ready" ...
	I0817 02:44:11.095958 1662846 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:44:11.105426 1662846 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.115130 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:11.136809 1662846 pod_ready.go:92] pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.136824 1662846 pod_ready.go:81] duration metric: took 31.377737ms waiting for pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.136834 1662846 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.140355 1662846 pod_ready.go:92] pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.140372 1662846 pod_ready.go:81] duration metric: took 3.530681ms waiting for pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.140384 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.145229 1662846 pod_ready.go:92] pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.145269 1662846 pod_ready.go:81] duration metric: took 4.874316ms waiting for pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.145292 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.155084 1662846 pod_ready.go:92] pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.155097 1662846 pod_ready.go:81] duration metric: took 9.787982ms waiting for pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.155105 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6fvl" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.159276 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:11.210907 1662846 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:11.257270 1662846 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:11.502673 1662846 pod_ready.go:92] pod "kube-proxy-h6fvl" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.502728 1662846 pod_ready.go:81] duration metric: took 347.614714ms waiting for pod "kube-proxy-h6fvl" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.502752 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:07.789502 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:07.789527 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:07.789539 1660780 retry.go:31] will retry after 951.868007ms: only 1 pod(s) have shown up
	I0817 02:44:08.743829 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:08.743853 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:08.743868 1660780 retry.go:31] will retry after 1.341783893s: only 1 pod(s) have shown up
	I0817 02:44:10.088004 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:10.088035 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:10.088048 1660780 retry.go:31] will retry after 1.876813009s: only 1 pod(s) have shown up
	I0817 02:44:11.967374 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:11.967401 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:11.967413 1660780 retry.go:31] will retry after 2.6934314s: only 1 pod(s) have shown up
	I0817 02:44:11.600462 1662846 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:44:11.600486 1662846 addons.go:344] enableAddons completed in 672.404962ms
	I0817 02:44:11.900577 1662846 pod_ready.go:92] pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.900629 1662846 pod_ready.go:81] duration metric: took 397.857202ms waiting for pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.900649 1662846 pod_ready.go:38] duration metric: took 804.679453ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:44:11.900677 1662846 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:44:11.900739 1662846 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:11.914199 1662846 api_server.go:70] duration metric: took 986.33934ms to wait for apiserver process to appear ...
	I0817 02:44:11.914238 1662846 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:44:11.914267 1662846 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:44:11.922723 1662846 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 02:44:11.923486 1662846 api_server.go:139] control plane version: v1.21.3
	I0817 02:44:11.923532 1662846 api_server.go:129] duration metric: took 9.277267ms to wait for apiserver health ...
	I0817 02:44:11.923552 1662846 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:44:12.113654 1662846 system_pods.go:59] 8 kube-system pods found
	I0817 02:44:12.113686 1662846 system_pods.go:61] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:12.113692 1662846 system_pods.go:61] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:12.113696 1662846 system_pods.go:61] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:12.113701 1662846 system_pods.go:61] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:12.113735 1662846 system_pods.go:61] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:12.113747 1662846 system_pods.go:61] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:12.113754 1662846 system_pods.go:61] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:12.113767 1662846 system_pods.go:61] "storage-provisioner" [562918b9-84e2-4f7e-9a0a-70742893e39d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:44:12.113773 1662846 system_pods.go:74] duration metric: took 190.207086ms to wait for pod list to return data ...
	I0817 02:44:12.113797 1662846 default_sa.go:34] waiting for default service account to be created ...
	I0817 02:44:12.300796 1662846 default_sa.go:45] found service account: "default"
	I0817 02:44:12.300822 1662846 default_sa.go:55] duration metric: took 187.014117ms for default service account to be created ...
	I0817 02:44:12.300830 1662846 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 02:44:12.506751 1662846 system_pods.go:86] 8 kube-system pods found
	I0817 02:44:12.506786 1662846 system_pods.go:89] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:12.506793 1662846 system_pods.go:89] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:12.510790 1662846 system_pods.go:89] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:12.510805 1662846 system_pods.go:89] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:12.510832 1662846 system_pods.go:89] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:12.510838 1662846 system_pods.go:89] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:12.510844 1662846 system_pods.go:89] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:12.510849 1662846 system_pods.go:89] "storage-provisioner" [562918b9-84e2-4f7e-9a0a-70742893e39d] Running
	I0817 02:44:12.510855 1662846 system_pods.go:126] duration metric: took 210.020669ms to wait for k8s-apps to be running ...
	I0817 02:44:12.510862 1662846 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 02:44:12.510915 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:12.520616 1662846 system_svc.go:56] duration metric: took 9.75179ms WaitForService to wait for kubelet.
	I0817 02:44:12.520637 1662846 kubeadm.go:547] duration metric: took 1.592794882s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 02:44:12.520657 1662846 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:44:12.701568 1662846 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:44:12.701598 1662846 node_conditions.go:123] node cpu capacity is 2
	I0817 02:44:12.701610 1662846 node_conditions.go:105] duration metric: took 180.94709ms to run NodePressure ...
	I0817 02:44:12.701620 1662846 start.go:231] waiting for startup goroutines ...
	I0817 02:44:12.753251 1662846 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 02:44:12.756175 1662846 out.go:177] * Done! kubectl is now configured to use "pause-20210817024148-1554185" cluster and "default" namespace by default
	I0817 02:44:14.664339 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:14.664360 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:14.664372 1660780 retry.go:31] will retry after 2.494582248s: only 1 pod(s) have shown up
	I0817 02:44:17.162988 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:17.163020 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:17.163032 1660780 retry.go:31] will retry after 3.420895489s: only 1 pod(s) have shown up
	I0817 02:44:20.589159 1660780 system_pods.go:59] 4 kube-system pods found
	I0817 02:44:20.589189 1660780 system_pods.go:61] "coredns-fb8b8dccf-w9fv2" [041a64a9-ff05-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:20.589196 1660780 system_pods.go:61] "kindnet-jn94r" [0425580c-ff05-11eb-a19f-024225e4e7af] Running
	I0817 02:44:20.589201 1660780 system_pods.go:61] "kube-proxy-spnf9" [0425385c-ff05-11eb-a19f-024225e4e7af] Running
	I0817 02:44:20.589206 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:20.589212 1660780 system_pods.go:74] duration metric: took 16.383614452s to wait for pod list to return data ...
	I0817 02:44:20.589227 1660780 kubeadm.go:547] duration metric: took 16.628465985s to wait for : map[apiserver:true system_pods:true] ...
	I0817 02:44:20.589244 1660780 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:44:20.595966 1660780 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:44:20.595984 1660780 node_conditions.go:123] node cpu capacity is 2
	I0817 02:44:20.595994 1660780 node_conditions.go:105] duration metric: took 6.746092ms to run NodePressure ...
	I0817 02:44:20.596011 1660780 start.go:231] waiting for startup goroutines ...
	I0817 02:44:20.685730 1660780 start.go:462] kubectl: 1.21.3, cluster: 1.14.0 (minor skew: 7)
	I0817 02:44:20.687997 1660780 out.go:177] 
	W0817 02:44:20.688127 1660780 out.go:242] ! /usr/local/bin/kubectl is version 1.21.3, which may have incompatibilites with Kubernetes 1.14.0.
	I0817 02:44:20.689593 1660780 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0817 02:44:20.692233 1660780 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20210817024307-1554185" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	cf9fe43a28990       ba04bb24b9575       19 seconds ago       Running             storage-provisioner       0                   62792bf694eb6
	6f0de758f96ce       1a1f05a2cd7c2       37 seconds ago       Running             coredns                   0                   b107ef4ef1079
	335440e08b6b6       f37b7c809e5dc       About a minute ago   Running             kindnet-cni               0                   ec238e8d3a6b2
	aad2134f4047a       4ea38350a1beb       About a minute ago   Running             kube-proxy                0                   771e9a30f4bda
	ec4892b38d019       44a6d50ef170d       About a minute ago   Running             kube-apiserver            0                   7a53464dc6cc7
	f45a4f177814d       cb310ff289d79       About a minute ago   Running             kube-controller-manager   0                   73b440ce137c2
	fb735a50aaaf9       05b738aa1bc63       About a minute ago   Running             etcd                      0                   3daddbac69e62
	63836f8fc4c5a       31a3b96cefc1e       About a minute ago   Running             kube-scheduler            0                   bb150a03bb9cc
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:41:51 UTC, end at Tue 2021-08-17 02:44:31 UTC. --
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204239884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204253184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204285201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204412887Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204478118Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0x40003d0f60 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPlug
inConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.4.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204553079Z" level=info msg="Connect containerd service"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204613452Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205685425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205900471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205941102Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 17 02:43:59 pause-20210817024148-1554185 systemd[1]: Started containerd container runtime.
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.207100615Z" level=info msg="containerd successfully booted in 0.049192s"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.211900478Z" level=info msg="Start subscribing containerd event"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.220448431Z" level=info msg="Start recovering state"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299199088Z" level=info msg="Start event monitor"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299328802Z" level=info msg="Start snapshots syncer"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299384194Z" level=info msg="Start cni network conf syncer"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299434393Z" level=info msg="Start streaming server"
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.892999886Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:562918b9-84e2-4f7e-9a0a-70742893e39d,Namespace:kube-system,Attempt:0,}"
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.920384792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d pid=2435
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.993666940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:562918b9-84e2-4f7e-9a0a-70742893e39d,Namespace:kube-system,Attempt:0,} returns sandbox id \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\""
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.996018212Z" level=info msg="CreateContainer within sandbox \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.024621543Z" level=info msg="CreateContainer within sandbox \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\""
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.025144015Z" level=info msg="StartContainer for \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\""
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.093587482Z" level=info msg="StartContainer for \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\" returns successfully"
	
	* 
	* ==> coredns [6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210817024148-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-20210817024148-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210817024148-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T02_42_50_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 02:42:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210817024148-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 02:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:43:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20210817024148-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                1148c453-a7b1-434d-b3fe-0e100988f0a3
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-bzchw                                100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (2%!)(MISSING)     87s
	  kube-system                 etcd-pause-20210817024148-1554185                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         102s
	  kube-system                 kindnet-9lnwm                                           100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (0%!)(MISSING)        50Mi (0%!)(MISSING)      87s
	  kube-system                 kube-apiserver-pause-20210817024148-1554185             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         104s
	  kube-system                 kube-controller-manager-pause-20210817024148-1554185    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         93s
	  kube-system                 kube-proxy-h6fvl                                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         87s
	  kube-system                 kube-scheduler-pause-20210817024148-1554185             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         93s
	  kube-system                 storage-provisioner                                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%!)(MISSING)  100m (5%!)(MISSING)
	  memory             220Mi (2%!)(MISSING)  220Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  113s (x5 over 113s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x5 over 113s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x4 over 113s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 93s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  93s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  93s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 86s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                43s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583] <==
	* 2021-08-17 02:42:39.573167 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 02:42:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 02:42:39.573411 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 02:42:40 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 02:42:40.459131 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 02:42:40.465042 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 02:42:40.465114 I | etcdserver: published {Name:pause-20210817024148-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 02:42:40.465227 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 02:42:40.465256 I | embed: ready to serve client requests
	2021-08-17 02:42:40.469660 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 02:42:40.469974 I | embed: ready to serve client requests
	2021-08-17 02:42:40.471130 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:42:49.193636 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:03.920345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:08.512078 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:18.511903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:28.515027 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:38.512484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:48.512959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:58.512373 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:44:08.511866 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:44:31 up 10:26,  0 users,  load average: 2.82, 1.84, 1.30
	Linux pause-20210817024148-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2] <==
	* I0817 02:42:47.639727       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 02:42:47.639906       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 02:42:47.665038       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 02:42:47.669713       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 02:42:47.669739       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 02:42:48.274054       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 02:42:48.310293       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 02:42:48.398617       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 02:42:48.400201       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 02:42:48.403641       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 02:42:49.313095       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 02:42:49.844292       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 02:42:49.897658       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 02:42:58.279367       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 02:43:04.136440       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0817 02:43:04.199838       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 02:43:21.010651       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:43:21.010876       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:43:21.010973       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:43:51.301051       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:43:51.301091       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:43:51.301099       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:44:28.852464       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:44:28.852640       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:44:28.852660       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367] <==
	* I0817 02:43:03.483116       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0817 02:43:03.483462       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0817 02:43:03.483847       1 event.go:291] "Event occurred" object="pause-20210817024148-1554185" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210817024148-1554185 event: Registered Node pause-20210817024148-1554185 in Controller"
	I0817 02:43:03.491024       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0817 02:43:03.525270       1 shared_informer.go:247] Caches are synced for HPA 
	I0817 02:43:03.531014       1 shared_informer.go:247] Caches are synced for attach detach 
	I0817 02:43:03.531095       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 02:43:03.531106       1 shared_informer.go:247] Caches are synced for endpoint 
	I0817 02:43:03.542828       1 shared_informer.go:247] Caches are synced for disruption 
	I0817 02:43:03.542888       1 disruption.go:371] Sending events to api server.
	I0817 02:43:03.543000       1 shared_informer.go:247] Caches are synced for job 
	I0817 02:43:03.543063       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0817 02:43:03.593684       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:43:03.657513       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:43:04.079136       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:43:04.079314       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 02:43:04.127990       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:43:04.138879       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0817 02:43:04.216419       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h6fvl"
	I0817 02:43:04.226354       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9lnwm"
	I0817 02:43:04.391394       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0817 02:43:04.400006       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-z7v6b"
	I0817 02:43:04.411923       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-bzchw"
	I0817 02:43:04.436039       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-z7v6b"
	I0817 02:43:48.489538       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66] <==
	* I0817 02:43:05.040138       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:43:05.040427       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:43:05.040569       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:43:05.066321       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:43:05.066475       1 server_others.go:212] Using iptables Proxier.
	I0817 02:43:05.066558       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:43:05.066632       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:43:05.067006       1 server.go:643] Version: v1.21.3
	I0817 02:43:05.067885       1 config.go:315] Starting service config controller
	I0817 02:43:05.068016       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:43:05.068105       1 config.go:224] Starting endpoint slice config controller
	I0817 02:43:05.068187       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:43:05.075717       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:43:05.079542       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:43:05.169159       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 02:43:05.169216       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08] <==
	* E0817 02:42:46.829872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:42:46.829928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0817 02:42:46.830223       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 02:42:46.830599       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:42:46.830655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:42:46.830711       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:42:46.830755       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:42:46.833525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.833686       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:42:46.833809       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.834107       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 02:42:46.840341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.843238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:42:47.692486       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.720037       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:42:47.720290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 02:42:47.763006       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.788725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.934043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:42:47.972589       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:42:47.976875       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:42:47.998841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:42:48.041214       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:42:48.197544       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 02:42:49.931904       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:41:51 UTC, end at Tue 2021-08-17 02:44:31 UTC. --
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738343    3444 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738392    3444 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738458    3444 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738472    3444 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738481    3444 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738551    3444 remote_runtime.go:62] parsed scheme: ""
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738558    3444 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738589    3444 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738597    3444 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738637    3444 remote_image.go:50] parsed scheme: ""
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738643    3444 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738650    3444 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738655    3444 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738705    3444 kubelet.go:404] "Attempting to sync node with API server"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738720    3444 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738743    3444 kubelet.go:283] "Adding apiserver pod source"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738765    3444 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738956    3444 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.762637    3444 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="1.4.9" apiVersion="v1alpha2"
	Aug 17 02:44:27 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:27.740525    3444 apiserver.go:52] "Watching apiserver"
	Aug 17 02:44:30 pause-20210817024148-1554185 kubelet[3444]: E0817 02:44:30.056391    3444 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 17 02:44:30 pause-20210817024148-1554185 kubelet[3444]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 17 02:44:30 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:30.057365    3444 server.go:1190] "Started kubelet"
	Aug 17 02:44:30 pause-20210817024148-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 02:44:30 pause-20210817024148-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42] <==
	* I0817 02:44:12.092161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 02:44:12.117851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 02:44:12.117933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 02:44:12.144567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 02:44:12.144687       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7dcc4aca-a2da-4802-9687-f8a1d81928d3", APIVersion:"v1", ResourceVersion:"515", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10 became leader
	I0817 02:44:12.145088       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10!
	I0817 02:44:12.245861       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185: exit status 2 (319.628688ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210817024148-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210817024148-1554185 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210817024148-1554185 describe pod : exit status 1 (60.637767ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context pause-20210817024148-1554185 describe pod : exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210817024148-1554185
helpers_test.go:236: (dbg) docker inspect pause-20210817024148-1554185:

-- stdout --
	[
	    {
	        "Id": "b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b",
	        "Created": "2021-08-17T02:41:50.320902147Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1657229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:41:51.004888651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/hosts",
	        "LogPath": "/var/lib/docker/containers/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b-json.log",
	        "Name": "/pause-20210817024148-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20210817024148-1554185:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210817024148-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c9705b67d935585ce6fd228c7af466dfdd783464ea25603f84fd455cbb0ea98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210817024148-1554185",
	                "Source": "/var/lib/docker/volumes/pause-20210817024148-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210817024148-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210817024148-1554185",
	                "name.minikube.sigs.k8s.io": "pause-20210817024148-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8a8e6cd79da22a5765a51578b1ea6e8efa8e27c6c5dbb571e80d79023db3847",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50410"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50406"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50408"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50407"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d8a8e6cd79da",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210817024148-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b9b1ad2a3171",
	                        "pause-20210817024148-1554185"
	                    ],
	                    "NetworkID": "747733296a426a6f52daff293191c7fb9ea960ba5380b91809f97050286a1932",
	                    "EndpointID": "54b17c5460167eb93db2a6807c51835973c485175b87062600e587d432698b14",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185: exit status 2 (303.551717ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p pause-20210817024148-1554185 logs -n 25
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                     |                   Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210817022620-1554185            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:29:49 UTC |
	|         | ssh -n                                      |                                             |         |         |                               |                               |
	|         | multinode-20210817022620-1554185-m03        |                                             |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt           |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:29:49 UTC | Tue, 17 Aug 2021 02:30:09 UTC |
	|         | node stop m03                               |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:30:10 UTC | Tue, 17 Aug 2021 02:30:40 UTC |
	|         | node start m03 --alsologtostderr            |                                             |         |         |                               |                               |
	| stop    | -p                                          | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:30:41 UTC | Tue, 17 Aug 2021 02:31:41 UTC |
	|         | multinode-20210817022620-1554185            |                                             |         |         |                               |                               |
	| start   | -p                                          | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:31:41 UTC | Tue, 17 Aug 2021 02:34:02 UTC |
	|         | multinode-20210817022620-1554185            |                                             |         |         |                               |                               |
	|         | --wait=true -v=8                            |                                             |         |         |                               |                               |
	|         | --alsologtostderr                           |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:34:02 UTC | Tue, 17 Aug 2021 02:34:26 UTC |
	|         | node delete m03                             |                                             |         |         |                               |                               |
	| -p      | multinode-20210817022620-1554185            | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:34:27 UTC | Tue, 17 Aug 2021 02:35:07 UTC |
	|         | stop                                        |                                             |         |         |                               |                               |
	| start   | -p                                          | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:35:07 UTC | Tue, 17 Aug 2021 02:36:46 UTC |
	|         | multinode-20210817022620-1554185            |                                             |         |         |                               |                               |
	|         | --wait=true -v=8                            |                                             |         |         |                               |                               |
	|         | --alsologtostderr                           |                                             |         |         |                               |                               |
	|         | --driver=docker                             |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd              |                                             |         |         |                               |                               |
	| start   | -p                                          | multinode-20210817022620-1554185-m03        | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:36:47 UTC | Tue, 17 Aug 2021 02:37:57 UTC |
	|         | multinode-20210817022620-1554185-m03        |                                             |         |         |                               |                               |
	|         | --driver=docker                             |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd              |                                             |         |         |                               |                               |
	| delete  | -p                                          | multinode-20210817022620-1554185-m03        | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:37:57 UTC | Tue, 17 Aug 2021 02:38:00 UTC |
	|         | multinode-20210817022620-1554185-m03        |                                             |         |         |                               |                               |
	| delete  | -p                                          | multinode-20210817022620-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:38:00 UTC | Tue, 17 Aug 2021 02:38:04 UTC |
	|         | multinode-20210817022620-1554185            |                                             |         |         |                               |                               |
	| start   | -p                                          | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:39:35 UTC | Tue, 17 Aug 2021 02:40:42 UTC |
	|         | scheduled-stop-20210817023935-1554185       |                                             |         |         |                               |                               |
	|         | --memory=2048 --driver=docker               |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd              |                                             |         |         |                               |                               |
	| stop    | -p                                          | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:40:42 UTC | Tue, 17 Aug 2021 02:40:42 UTC |
	|         | scheduled-stop-20210817023935-1554185       |                                             |         |         |                               |                               |
	|         | --cancel-scheduled                          |                                             |         |         |                               |                               |
	| stop    | -p                                          | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:40:55 UTC | Tue, 17 Aug 2021 02:41:20 UTC |
	|         | scheduled-stop-20210817023935-1554185       |                                             |         |         |                               |                               |
	|         | --schedule 5s                               |                                             |         |         |                               |                               |
	| delete  | -p                                          | scheduled-stop-20210817023935-1554185       | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:20 UTC | Tue, 17 Aug 2021 02:41:25 UTC |
	|         | scheduled-stop-20210817023935-1554185       |                                             |         |         |                               |                               |
	| delete  | -p                                          | insufficient-storage-20210817024125-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:42 UTC | Tue, 17 Aug 2021 02:41:48 UTC |
	|         | insufficient-storage-20210817024125-1554185 |                                             |         |         |                               |                               |
	| delete  | -p                                          | missing-upgrade-20210817024148-1554185      | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:02 UTC | Tue, 17 Aug 2021 02:43:07 UTC |
	|         | missing-upgrade-20210817024148-1554185      |                                             |         |         |                               |                               |
	| start   | -p                                          | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:41:48 UTC | Tue, 17 Aug 2021 02:43:55 UTC |
	|         | pause-20210817024148-1554185                |                                             |         |         |                               |                               |
	|         | --memory=2048                               |                                             |         |         |                               |                               |
	|         | --install-addons=false                      |                                             |         |         |                               |                               |
	|         | --wait=all --driver=docker                  |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd              |                                             |         |         |                               |                               |
	| start   | -p                                          | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:55 UTC | Tue, 17 Aug 2021 02:44:12 UTC |
	|         | pause-20210817024148-1554185                |                                             |         |         |                               |                               |
	|         | --alsologtostderr                           |                                             |         |         |                               |                               |
	|         | -v=1 --driver=docker                        |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd              |                                             |         |         |                               |                               |
	| -p      | pause-20210817024148-1554185                | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:15 UTC | Tue, 17 Aug 2021 02:44:15 UTC |
	|         | logs -n 25                                  |                                             |         |         |                               |                               |
	| -p      | pause-20210817024148-1554185                | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:16 UTC | Tue, 17 Aug 2021 02:44:18 UTC |
	|         | logs -n 25                                  |                                             |         |         |                               |                               |
	| -p      | pause-20210817024148-1554185                | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:19 UTC | Tue, 17 Aug 2021 02:44:20 UTC |
	|         | logs -n 25                                  |                                             |         |         |                               |                               |
	| start   | -p                                          | kubernetes-upgrade-20210817024307-1554185   | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:43:07 UTC | Tue, 17 Aug 2021 02:44:20 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185   |                                             |         |         |                               |                               |
	|         | --memory=2200                               |                                             |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                |                                             |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker      |                                             |         |         |                               |                               |
	|         | --container-runtime=containerd              |                                             |         |         |                               |                               |
	| unpause | -p                                          | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:21 UTC | Tue, 17 Aug 2021 02:44:21 UTC |
	|         | pause-20210817024148-1554185                |                                             |         |         |                               |                               |
	|         | --alsologtostderr -v=5                      |                                             |         |         |                               |                               |
	| -p      | pause-20210817024148-1554185                | pause-20210817024148-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:30 UTC | Tue, 17 Aug 2021 02:44:31 UTC |
	|         | logs -n 25                                  |                                             |         |         |                               |                               |
	|---------|---------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 02:43:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 02:43:55.935620 1662846 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:43:55.935723 1662846 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:43:55.935738 1662846 out.go:311] Setting ErrFile to fd 2...
	I0817 02:43:55.935766 1662846 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:43:55.935946 1662846 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:43:55.936251 1662846 out.go:305] Setting JSON to false
	I0817 02:43:55.937622 1662846 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37574,"bootTime":1629130662,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:43:55.937719 1662846 start.go:121] virtualization:  
	I0817 02:43:55.939817 1662846 out.go:177] * [pause-20210817024148-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:43:55.941669 1662846 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:43:55.940702 1662846 notify.go:169] Checking for updates...
	I0817 02:43:55.943679 1662846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:43:55.945437 1662846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:43:55.946802 1662846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:43:55.947210 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:43:55.947633 1662846 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:43:56.027823 1662846 docker.go:132] docker version: linux-20.10.8
	I0817 02:43:56.027923 1662846 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:43:56.176370 1662846 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-17 02:43:56.090848407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:43:56.176502 1662846 docker.go:244] overlay module found
	I0817 02:43:56.179749 1662846 out.go:177] * Using the docker driver based on existing profile
	I0817 02:43:56.179775 1662846 start.go:278] selected driver: docker
	I0817 02:43:56.179782 1662846 start.go:751] validating driver "docker" against &{Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:43:56.179866 1662846 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 02:43:56.179980 1662846 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:43:56.292126 1662846 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-17 02:43:56.216837922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:43:56.292468 1662846 cni.go:93] Creating CNI manager for ""
	I0817 02:43:56.292486 1662846 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:43:56.292501 1662846 start_flags.go:277] config:
	{Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:43:56.294520 1662846 out.go:177] * Starting control plane node pause-20210817024148-1554185 in cluster pause-20210817024148-1554185
	I0817 02:43:56.294554 1662846 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 02:43:56.296008 1662846 out.go:177] * Pulling base image ...
	I0817 02:43:56.296031 1662846 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:43:56.296059 1662846 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 02:43:56.296077 1662846 cache.go:56] Caching tarball of preloaded images
	I0817 02:43:56.296206 1662846 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 02:43:56.296231 1662846 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 02:43:56.296337 1662846 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/config.json ...
	I0817 02:43:56.296506 1662846 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 02:43:56.358839 1662846 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 02:43:56.358863 1662846 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 02:43:56.358876 1662846 cache.go:205] Successfully downloaded all kic artifacts
	I0817 02:43:56.358910 1662846 start.go:313] acquiring machines lock for pause-20210817024148-1554185: {Name:mk43ad0c6625870b459afd5900940b78473b954e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 02:43:56.358994 1662846 start.go:317] acquired machines lock for "pause-20210817024148-1554185" in 57.583µs
	I0817 02:43:56.359016 1662846 start.go:93] Skipping create...Using existing machine configuration
	I0817 02:43:56.359025 1662846 fix.go:55] fixHost starting: 
	I0817 02:43:56.359303 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:43:56.396845 1662846 fix.go:108] recreateIfNeeded on pause-20210817024148-1554185: state=Running err=<nil>
	W0817 02:43:56.396879 1662846 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 02:43:56.399120 1662846 out.go:177] * Updating the running docker "pause-20210817024148-1554185" container ...
	I0817 02:43:56.399143 1662846 machine.go:88] provisioning docker machine ...
	I0817 02:43:56.399156 1662846 ubuntu.go:169] provisioning hostname "pause-20210817024148-1554185"
	I0817 02:43:56.399223 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:56.460270 1662846 main.go:130] libmachine: Using SSH client type: native
	I0817 02:43:56.460437 1662846 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50410 <nil> <nil>}
	I0817 02:43:56.460450 1662846 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210817024148-1554185 && echo "pause-20210817024148-1554185" | sudo tee /etc/hostname
	I0817 02:43:56.592739 1662846 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210817024148-1554185
	
	I0817 02:43:56.592882 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:56.636319 1662846 main.go:130] libmachine: Using SSH client type: native
	I0817 02:43:56.636499 1662846 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50410 <nil> <nil>}
	I0817 02:43:56.636520 1662846 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210817024148-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210817024148-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210817024148-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 02:43:56.775961 1662846 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 02:43:56.775984 1662846 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 02:43:56.776018 1662846 ubuntu.go:177] setting up certificates
	I0817 02:43:56.776029 1662846 provision.go:83] configureAuth start
	I0817 02:43:56.776079 1662846 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817024148-1554185
	I0817 02:43:56.827590 1662846 provision.go:138] copyHostCerts
	I0817 02:43:56.827646 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 02:43:56.827654 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 02:43:56.827713 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 02:43:56.827792 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 02:43:56.827799 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 02:43:56.827820 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 02:43:56.827872 1662846 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 02:43:56.827880 1662846 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 02:43:56.827900 1662846 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 02:43:56.827946 1662846 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.pause-20210817024148-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210817024148-1554185]
	I0817 02:43:57.691741 1662846 provision.go:172] copyRemoteCerts
	I0817 02:43:57.691838 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 02:43:57.691973 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:57.733998 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:57.822192 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 02:43:57.856857 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 02:43:57.883796 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 02:43:57.906578 1662846 provision.go:86] duration metric: configureAuth took 1.130540743s
	I0817 02:43:57.906595 1662846 ubuntu.go:193] setting minikube options for container-runtime
	I0817 02:43:57.906755 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:43:57.906762 1662846 machine.go:91] provisioned docker machine in 1.507614043s
	I0817 02:43:57.906767 1662846 start.go:267] post-start starting for "pause-20210817024148-1554185" (driver="docker")
	I0817 02:43:57.906773 1662846 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 02:43:57.906827 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 02:43:57.906865 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:57.948809 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.037379 1662846 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 02:43:58.040800 1662846 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 02:43:58.040823 1662846 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 02:43:58.040834 1662846 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 02:43:58.040841 1662846 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 02:43:58.040851 1662846 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 02:43:58.040896 1662846 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 02:43:58.040978 1662846 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 02:43:58.041076 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 02:43:58.047645 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:43:58.063186 1662846 start.go:270] post-start completed in 156.406695ms
	I0817 02:43:58.063236 1662846 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:43:58.063283 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.095312 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.179965 1662846 fix.go:57] fixHost completed within 1.82093535s
	I0817 02:43:58.179990 1662846 start.go:80] releasing machines lock for "pause-20210817024148-1554185", held for 1.820983908s
	I0817 02:43:58.180071 1662846 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817024148-1554185
	I0817 02:43:58.213692 1662846 ssh_runner.go:149] Run: systemctl --version
	I0817 02:43:58.213738 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.213787 1662846 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 02:43:58.213879 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:43:58.294808 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.305791 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:43:58.579632 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 02:43:58.595650 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 02:43:58.606604 1662846 docker.go:153] disabling docker service ...
	I0817 02:43:58.606667 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 02:43:58.617075 1662846 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 02:43:58.626385 1662846 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 02:43:58.766845 1662846 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 02:43:58.893614 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 02:43:58.903792 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 02:43:58.915967 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
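The printf above pipes a base64 blob through base64 -d into /etc/containerd/config.toml, so the generated containerd configuration is not readable in the log as-is. Two hedged ways to inspect what was actually written (profile name taken from this run; the second command only needs the blob copied out of the log):

	# Read the file back from the node:
	minikube ssh -p pause-20210817024148-1554185 -- sudo cat /etc/containerd/config.toml
	# Or decode the blob locally without touching the node:
	echo '<base64 payload copied from the log>' | base64 -d | less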
	I0817 02:43:58.928706 1662846 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 02:43:58.935023 1662846 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 02:43:58.941385 1662846 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 02:43:59.052351 1662846 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 02:43:59.207573 1662846 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 02:43:59.207636 1662846 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 02:43:59.211818 1662846 start.go:413] Will wait 60s for crictl version
	I0817 02:43:59.211935 1662846 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:43:59.253079 1662846 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T02:43:59Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
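sudo crictl version fails at this point because containerd was restarted a second earlier and its CRI plugin is still starting ("server is not initialized yet"), so retry.go backs off and tries again about 11s later. A minimal shell equivalent of that wait, assuming crictl uses the endpoint written to /etc/crictl.yaml above:

	# Poll the CRI server until it answers, roughly what the 60s wait in start.go amounts to.
	for i in $(seq 1 30); do
	    sudo crictl version && break
	    sleep 2
	done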
	I0817 02:44:02.120599 1660780 out.go:204]   - Configuring RBAC rules ...
	I0817 02:44:02.550006 1660780 cni.go:93] Creating CNI manager for ""
	I0817 02:44:02.550033 1660780 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:44:02.551994 1660780 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:44:02.552050 1660780 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:44:02.555689 1660780 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0817 02:44:02.555706 1660780 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:44:02.567554 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
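The apply above installs the kindnet manifest that minikube generated in memory (cni.yaml, 2429 bytes). A hypothetical follow-up check run on the node; note that the app=kindnet label selector is an assumption about that manifest and does not appear in this log:

	sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get pods -n kube-system -l app=kindnet -o wide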
	I0817 02:44:03.257879 1660780 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:44:03.258050 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=kubernetes-upgrade-20210817024307-1554185 minikube.k8s.io/updated_at=2021_08_17T02_44_03_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:44:03.258163 1660780 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:44:03.421126 1660780 kubeadm.go:985] duration metric: took 163.181168ms to wait for elevateKubeSystemPrivileges.
	I0817 02:44:03.421159 1660780 ops.go:34] apiserver oom_adj: 16
	I0817 02:44:03.421165 1660780 ops.go:39] adjusting apiserver oom_adj to -10
	I0817 02:44:03.421175 1660780 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
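ops.go reads the apiserver's oom_adj (16 in this run) and lowers it to -10 so the kernel OOM killer prefers to reap other processes first. The read and the write are exactly the two shell commands already visible in the log; condensed:

	cat /proc/$(pgrep kube-apiserver)/oom_adj             # 16 before the change
	echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj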
	I0817 02:44:03.435297 1660780 kubeadm.go:392] StartCluster complete in 20.942151429s
	I0817 02:44:03.435324 1660780 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:03.435396 1660780 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:44:03.436729 1660780 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:03.437591 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:03.960675 1660780 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20210817024307-1554185" rescaled to 1
	I0817 02:44:03.960735 1660780 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0817 02:44:03.962792 1660780 out.go:177] * Verifying Kubernetes components...
	I0817 02:44:03.962884 1660780 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:03.960776 1660780 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:44:03.961119 1660780 config.go:177] Loaded profile config "kubernetes-upgrade-20210817024307-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0817 02:44:03.961133 1660780 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 02:44:03.963065 1660780 addons.go:59] Setting storage-provisioner=true in profile "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963086 1660780 addons.go:135] Setting addon storage-provisioner=true in "kubernetes-upgrade-20210817024307-1554185"
	W0817 02:44:03.963092 1660780 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:44:03.963117 1660780 host.go:66] Checking if "kubernetes-upgrade-20210817024307-1554185" exists ...
	I0817 02:44:03.963609 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:03.963742 1660780 addons.go:59] Setting default-storageclass=true in profile "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963759 1660780 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20210817024307-1554185"
	I0817 02:44:03.963983 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:04.042068 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:04.045974 1660780 addons.go:135] Setting addon default-storageclass=true in "kubernetes-upgrade-20210817024307-1554185"
	W0817 02:44:04.045994 1660780 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:44:04.046020 1660780 host.go:66] Checking if "kubernetes-upgrade-20210817024307-1554185" exists ...
	I0817 02:44:04.046775 1660780 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817024307-1554185 --format={{.State.Status}}
	I0817 02:44:04.067540 1660780 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:44:04.067635 1660780 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:04.067644 1660780 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:44:04.067698 1660780 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817024307-1554185
	I0817 02:44:04.134515 1660780 kapi.go:59] client config for kubernetes-upgrade-20210817024307-1554185: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210817024307-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minik
ube/profiles/kubernetes-upgrade-20210817024307-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:04.135773 1660780 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:44:04.135809 1660780 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:04.135958 1660780 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 02:44:04.162945 1660780 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:04.162964 1660780 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:44:04.163014 1660780 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817024307-1554185
	I0817 02:44:04.169842 1660780 api_server.go:70] duration metric: took 209.062582ms to wait for apiserver process to appear ...
	I0817 02:44:04.169860 1660780 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:44:04.169869 1660780 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:44:04.202845 1660780 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
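The healthz probe above goes straight to https://192.168.58.2:8443/healthz. A manual sketch of the same check with curl, reusing the client certificate paths from the rest.Config dump a few lines earlier ($MINIKUBE_HOME and $PROFILE_DIR are only shorthand for the long paths shown there):

	curl --cacert "$MINIKUBE_HOME/ca.crt" \
	     --cert   "$PROFILE_DIR/client.crt" \
	     --key    "$PROFILE_DIR/client.key" \
	     https://192.168.58.2:8443/healthz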
	I0817 02:44:04.203126 1660780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50433 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210817024307-1554185/id_rsa Username:docker}
	I0817 02:44:04.205563 1660780 api_server.go:139] control plane version: v1.14.0
	I0817 02:44:04.205585 1660780 api_server.go:129] duration metric: took 35.719651ms to wait for apiserver health ...
	I0817 02:44:04.205593 1660780 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:44:04.225295 1660780 system_pods.go:59] 0 kube-system pods found
	I0817 02:44:04.225327 1660780 retry.go:31] will retry after 305.063636ms: only 0 pod(s) have shown up
	I0817 02:44:04.236904 1660780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50433 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210817024307-1554185/id_rsa Username:docker}
	I0817 02:44:04.357326 1660780 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:04.437896 1660780 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:04.532218 1660780 system_pods.go:59] 0 kube-system pods found
	I0817 02:44:04.532272 1660780 retry.go:31] will retry after 338.212508ms: only 0 pod(s) have shown up
	I0817 02:44:04.546195 1660780 start.go:728] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
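This confirms the sed pipeline from a few lines up: the coredns ConfigMap now carries a hosts block mapping host.minikube.internal to 192.168.58.1. A hedged way to double-check it, run on the node with the same kubectl binary and kubeconfig used in the log:

	sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'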
	I0817 02:44:04.758200 1660780 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:44:04.758230 1660780 addons.go:344] enableAddons completed in 797.092673ms
	I0817 02:44:04.873249 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:04.873282 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:04.873317 1660780 retry.go:31] will retry after 378.459802ms: only 1 pod(s) have shown up
	I0817 02:44:05.254768 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:05.254794 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:05.254805 1660780 retry.go:31] will retry after 469.882201ms: only 1 pod(s) have shown up
	I0817 02:44:05.727524 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:05.727553 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:05.727565 1660780 retry.go:31] will retry after 667.365439ms: only 1 pod(s) have shown up
	I0817 02:44:06.397213 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:06.397242 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:06.397268 1660780 retry.go:31] will retry after 597.243124ms: only 1 pod(s) have shown up
	I0817 02:44:06.996568 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:06.996597 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:06.996622 1660780 retry.go:31] will retry after 789.889932ms: only 1 pod(s) have shown up
	I0817 02:44:10.303855 1662846 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:44:10.329831 1662846 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 02:44:10.329883 1662846 ssh_runner.go:149] Run: containerd --version
	I0817 02:44:10.350547 1662846 ssh_runner.go:149] Run: containerd --version
	I0817 02:44:10.372464 1662846 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 02:44:10.372542 1662846 cli_runner.go:115] Run: docker network inspect pause-20210817024148-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 02:44:10.403413 1662846 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 02:44:10.406786 1662846 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:44:10.406887 1662846 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:44:10.432961 1662846 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:44:10.432979 1662846 containerd.go:517] Images already preloaded, skipping extraction
	I0817 02:44:10.433017 1662846 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:44:10.456152 1662846 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:44:10.456171 1662846 cache_images.go:74] Images are preloaded, skipping loading
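containerd.go concludes the preload is complete by listing images over CRI. The same listing can be made human-readable on the node (jq is an assumption here; it is not necessarily present in the kicbase image):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'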
	I0817 02:44:10.456212 1662846 ssh_runner.go:149] Run: sudo crictl info
	I0817 02:44:10.478011 1662846 cni.go:93] Creating CNI manager for ""
	I0817 02:44:10.478033 1662846 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:44:10.478056 1662846 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 02:44:10.478081 1662846 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210817024148-1554185 NodeName:pause-20210817024148-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:
/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 02:44:10.478244 1662846 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20210817024148-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 02:44:10.478333 1662846 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210817024148-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 02:44:10.478387 1662846 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 02:44:10.484940 1662846 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 02:44:10.484985 1662846 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 02:44:10.490924 1662846 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (573 bytes)
	I0817 02:44:10.502660 1662846 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 02:44:10.513948 1662846 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2078 bytes)
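The three scp calls above write the kubelet drop-in (10-kubeadm.conf), the kubelet unit, and the regenerated kubeadm.yaml.new. A small sketch of verifying on the node that systemd sees the drop-in:

	sudo systemctl daemon-reload
	systemctl cat kubelet    # output should include /etc/systemd/system/kubelet.service.d/10-kubeadm.conf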
	I0817 02:44:10.524832 1662846 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 02:44:10.527527 1662846 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185 for IP: 192.168.49.2
	I0817 02:44:10.527570 1662846 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 02:44:10.527589 1662846 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 02:44:10.527638 1662846 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key
	I0817 02:44:10.527664 1662846 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.key.dd3b5fb2
	I0817 02:44:10.527684 1662846 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.key
	I0817 02:44:10.527782 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 02:44:10.527819 1662846 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 02:44:10.527834 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 02:44:10.527857 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 02:44:10.527884 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 02:44:10.527924 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 02:44:10.527974 1662846 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:44:10.529038 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 02:44:10.544784 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 02:44:10.559660 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 02:44:10.574785 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 02:44:10.590201 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 02:44:10.605037 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 02:44:10.623387 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 02:44:10.639037 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 02:44:10.654135 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 02:44:10.669090 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 02:44:10.684622 1662846 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 02:44:10.699670 1662846 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 02:44:10.710765 1662846 ssh_runner.go:149] Run: openssl version
	I0817 02:44:10.717019 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 02:44:10.724137 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.726900 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.726944 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 02:44:10.731588 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 02:44:10.737384 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 02:44:10.743565 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.746327 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.746375 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 02:44:10.750493 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 02:44:10.756191 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 02:44:10.762612 1662846 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.765540 1662846 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.765579 1662846 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:44:10.769923 1662846 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
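certs.go copies each CA into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 above). The two-step pattern, condensed into a sketch for the minikubeCA case:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"    # b5213941.0 in this run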
	I0817 02:44:10.775678 1662846 kubeadm.go:390] StartCluster: {Name:pause-20210817024148-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210817024148-1554185 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:44:10.775762 1662846 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 02:44:10.775824 1662846 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:44:10.802431 1662846 cri.go:76] found id: "6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a"
	I0817 02:44:10.802447 1662846 cri.go:76] found id: "335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0"
	I0817 02:44:10.802453 1662846 cri.go:76] found id: "aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66"
	I0817 02:44:10.802457 1662846 cri.go:76] found id: "ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:10.802462 1662846 cri.go:76] found id: "f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367"
	I0817 02:44:10.802470 1662846 cri.go:76] found id: "fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583"
	I0817 02:44:10.802478 1662846 cri.go:76] found id: "63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08"
	I0817 02:44:10.802488 1662846 cri.go:76] found id: ""
	I0817 02:44:10.802522 1662846 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:44:10.836121 1662846 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","pid":1575,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0/rootfs","created":"2021-08-17T02:43:04.897293883Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","pid":919,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","rootfs":"/run/containerd/io.containerd.runtime
.v2.task/k8s.io/3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60/rootfs","created":"2021-08-17T02:42:39.314001317Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210817024148-1554185_ed0234c9cb81abc8dc5bbcdfaf787883"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","pid":1063,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08/rootfs","created":"2021-08-17T02:42:39.515581049Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id"
:"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","pid":1895,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a/rootfs","created":"2021-08-17T02:43:53.705600139Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","rootfs":"/run/containerd
/io.containerd.runtime.v2.task/k8s.io/73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb/rootfs","created":"2021-08-17T02:42:39.34752748Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210817024148-1554185_0718a393987d1be3d2ae606f942a3f97"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa/rootfs","created":"2021-08-17T02:43:04.691568232Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"771e9a30f4bd
ab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-h6fvl_e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9/rootfs","created":"2021-08-17T02:42:39.402749597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210817024148-1554185_a97ec150b358b2334cd33dc4c454d661"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aad2134f4047a0355037c106be1e03aab147a921
033b3042e86497dd8533ae66","pid":1576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66/rootfs","created":"2021-08-17T02:43:04.899987912Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","pid":1861,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec/rootfs","created":"2021-08-17T02:43:53.53831483Z",
"annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-bzchw_5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","pid":931,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9/rootfs","created":"2021-08-17T02:42:39.343864386Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210817024148-1554185_b3d802d137e
d3edc177474465272f732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c/rootfs","created":"2021-08-17T02:43:04.691547802Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-9lnwm_8b4c4c45-1613-49c8-9b03-e13120205af4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2","pid":1140,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c466
5f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/rootfs","created":"2021-08-17T02:42:39.62877376Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","pid":1098,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367/rootfs","created":"2021-08-17T02:42:39.549869273Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73b440ce137c2e2a2607d
f03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","pid":1061,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583/rootfs","created":"2021-08-17T02:42:39.510915707Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60"},"owner":"root"}]
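cri.go asked for paused kube-system containers, so it dumps the runc state for everything under /run/containerd/runc/k8s.io and then, in the lines that follow, skips every task that is still running or is a sandbox it did not get back from crictl ps. A compact way to get the same id/status/name view of that JSON (jq assumed; run on the node as root):

	sudo runc --root /run/containerd/runc/k8s.io list -f json \
	  | jq -r '.[] | [.id[0:12], .status, (.annotations["io.kubernetes.cri.container-name"] // "sandbox")] | @tsv'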
	I0817 02:44:10.836356 1662846 cri.go:113] list returned 14 containers
	I0817 02:44:10.836368 1662846 cri.go:116] container: {ID:335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 Status:running}
	I0817 02:44:10.836387 1662846 cri.go:122] skipping {335440e08b6b64bbc1f59075eafbaa84f3628e82a529a7fb4d97bad4944a37b0 running}: state = "running", want "paused"
	I0817 02:44:10.836402 1662846 cri.go:116] container: {ID:3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 Status:running}
	I0817 02:44:10.836408 1662846 cri.go:118] skipping 3daddbac69e6201c8cf24b6db34aed45d1b656aa3aae3708180ef803e1a3cc60 - not in ps
	I0817 02:44:10.836413 1662846 cri.go:116] container: {ID:63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 Status:running}
	I0817 02:44:10.836423 1662846 cri.go:122] skipping {63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08 running}: state = "running", want "paused"
	I0817 02:44:10.836429 1662846 cri.go:116] container: {ID:6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a Status:running}
	I0817 02:44:10.836439 1662846 cri.go:122] skipping {6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a running}: state = "running", want "paused"
	I0817 02:44:10.836444 1662846 cri.go:116] container: {ID:73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb Status:running}
	I0817 02:44:10.836454 1662846 cri.go:118] skipping 73b440ce137c2e2a2607df03cc99d1a3e7f252b26e98b77ee1b2764d9241bbbb - not in ps
	I0817 02:44:10.836458 1662846 cri.go:116] container: {ID:771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa Status:running}
	I0817 02:44:10.836463 1662846 cri.go:118] skipping 771e9a30f4bdab3b5317a90f45b3af7f9ee2219a8b9ef95a867a4d0acc523aaa - not in ps
	I0817 02:44:10.836467 1662846 cri.go:116] container: {ID:7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 Status:running}
	I0817 02:44:10.836477 1662846 cri.go:118] skipping 7a53464dc6cc74fce3822e6cb4b3d7c4064de4f6026cc22b50e1499926ec05e9 - not in ps
	I0817 02:44:10.836481 1662846 cri.go:116] container: {ID:aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 Status:running}
	I0817 02:44:10.836493 1662846 cri.go:122] skipping {aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66 running}: state = "running", want "paused"
	I0817 02:44:10.836499 1662846 cri.go:116] container: {ID:b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec Status:running}
	I0817 02:44:10.836512 1662846 cri.go:118] skipping b107ef4ef107959deae2b0fd1c171d7e979e1950b61c543b0d19aaf40e3cabec - not in ps
	I0817 02:44:10.836518 1662846 cri.go:116] container: {ID:bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 Status:running}
	I0817 02:44:10.836524 1662846 cri.go:118] skipping bb150a03bb9cc50cb0b62a925c756f404ebec4ea89a605f31dd9dc096d5ffcb9 - not in ps
	I0817 02:44:10.836528 1662846 cri.go:116] container: {ID:ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c Status:running}
	I0817 02:44:10.836533 1662846 cri.go:118] skipping ec238e8d3a6b2fecdd7152db762ad6e0471771421220a3893a514b4f5c42be7c - not in ps
	I0817 02:44:10.836537 1662846 cri.go:116] container: {ID:ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 Status:running}
	I0817 02:44:10.836542 1662846 cri.go:122] skipping {ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2 running}: state = "running", want "paused"
	I0817 02:44:10.836547 1662846 cri.go:116] container: {ID:f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 Status:running}
	I0817 02:44:10.836553 1662846 cri.go:122] skipping {f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367 running}: state = "running", want "paused"
	I0817 02:44:10.836562 1662846 cri.go:116] container: {ID:fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 Status:running}
	I0817 02:44:10.836567 1662846 cri.go:122] skipping {fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583 running}: state = "running", want "paused"
	I0817 02:44:10.836606 1662846 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 02:44:10.842672 1662846 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 02:44:10.842686 1662846 kubeadm.go:600] restartCluster start
	I0817 02:44:10.842722 1662846 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 02:44:10.848569 1662846 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:44:10.849505 1662846 kubeconfig.go:93] found "pause-20210817024148-1554185" server: "https://192.168.49.2:8443"
	I0817 02:44:10.850203 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:10.851914 1662846 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 02:44:10.861281 1662846 api_server.go:164] Checking apiserver status ...
	I0817 02:44:10.861344 1662846 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:10.872606 1662846 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup
	I0817 02:44:10.878948 1662846 api_server.go:180] apiserver freezer: "6:freezer:/docker/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/kubepods/burstable/poda97ec150b358b2334cd33dc4c454d661/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2"
	I0817 02:44:10.879031 1662846 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/b9b1ad2a31718ce290c7e827646543048ebb8d758f9e0862a0e4f3301acdac4b/kubepods/burstable/poda97ec150b358b2334cd33dc4c454d661/ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2/freezer.state
	I0817 02:44:10.887614 1662846 api_server.go:202] freezer state: "THAWED"
	I0817 02:44:10.887653 1662846 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:44:10.897405 1662846 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
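	The sequence just above (pgrep for the kube-apiserver process, reading its freezer cgroup state, then hitting /healthz) is how minikube decides the control plane is live before reusing it. Below is a minimal illustrative sketch, not minikube's actual implementation, of the final step: an HTTPS GET against the /healthz endpoint using the profile's client certificate. The host:port and the certificate file names come from the log above; the directory prefixes are shortened placeholders.

	// healthz_probe.go - illustrative sketch only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Placeholder paths; the real files live under the profile directory shown in the kapi.go line above.
		cert, err := tls.LoadX509KeyPair(
			"/path/to/profiles/pause-20210817024148-1554185/client.crt",
			"/path/to/profiles/pause-20210817024148-1554185/client.key",
		)
		if err != nil {
			panic(err)
		}
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{
				Certificates:       []tls.Certificate{cert},
				InsecureSkipVerify: true, // the real check verifies against the cluster's ca.crt instead
			}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok, matching the log above
	}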
	I0817 02:44:10.921793 1662846 system_pods.go:86] 7 kube-system pods found
	I0817 02:44:10.921826 1662846 system_pods.go:89] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:10.921833 1662846 system_pods.go:89] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:10.921837 1662846 system_pods.go:89] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:10.921846 1662846 system_pods.go:89] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:10.921851 1662846 system_pods.go:89] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:10.921860 1662846 system_pods.go:89] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:10.921864 1662846 system_pods.go:89] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:10.922656 1662846 api_server.go:139] control plane version: v1.21.3
	I0817 02:44:10.922674 1662846 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.49.2
	I0817 02:44:10.922683 1662846 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0817 02:44:10.922688 1662846 kubeadm.go:604] restartCluster took 79.997602ms
	I0817 02:44:10.922692 1662846 kubeadm.go:392] StartCluster complete in 147.020078ms
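	The kubeadm.go lines earlier in this section choose between a full "kubeadm init" and a restart by checking whether the node already has kubeadm/kubelet configuration on disk. A rough sketch of that decision, assuming the command is run locally rather than through minikube's SSH runner:

	// restart_decision.go - illustrative sketch of the existing-config check.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		files := []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		}
		// "sudo ls f1 f2 f3" exits non-zero if any of the paths is missing.
		err := exec.Command("sudo", append([]string{"ls"}, files...)...).Run()
		if err == nil {
			fmt.Println("found existing configuration files, will attempt cluster restart")
		} else {
			fmt.Println("no existing configuration, running a full kubeadm init instead")
		}
	}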
	I0817 02:44:10.922711 1662846 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:10.922795 1662846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:44:10.923814 1662846 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:44:10.924639 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:10.927764 1662846 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210817024148-1554185" rescaled to 1
	I0817 02:44:10.927819 1662846 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 02:44:10.929557 1662846 out.go:177] * Verifying Kubernetes components...
	I0817 02:44:10.929621 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:10.928056 1662846 config.go:177] Loaded profile config "pause-20210817024148-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:44:10.928073 1662846 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:44:10.928083 1662846 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 02:44:10.929772 1662846 addons.go:59] Setting storage-provisioner=true in profile "pause-20210817024148-1554185"
	I0817 02:44:10.929796 1662846 addons.go:135] Setting addon storage-provisioner=true in "pause-20210817024148-1554185"
	W0817 02:44:10.929827 1662846 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:44:10.929865 1662846 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:10.930344 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:10.935094 1662846 addons.go:59] Setting default-storageclass=true in profile "pause-20210817024148-1554185"
	I0817 02:44:10.935122 1662846 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210817024148-1554185"
	I0817 02:44:10.935399 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:11.011181 1662846 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:44:11.011290 1662846 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:11.011301 1662846 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:44:11.011350 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:11.015016 1662846 kapi.go:59] client config for pause-20210817024148-1554185: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210817024148-1554185/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11164c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 02:44:11.019001 1662846 addons.go:135] Setting addon default-storageclass=true in "pause-20210817024148-1554185"
	W0817 02:44:11.019019 1662846 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:44:11.019042 1662846 host.go:66] Checking if "pause-20210817024148-1554185" exists ...
	I0817 02:44:11.019478 1662846 cli_runner.go:115] Run: docker container inspect pause-20210817024148-1554185 --format={{.State.Status}}
	I0817 02:44:11.072649 1662846 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:11.072687 1662846 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:44:11.072739 1662846 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817024148-1554185
	I0817 02:44:11.092036 1662846 node_ready.go:35] waiting up to 6m0s for node "pause-20210817024148-1554185" to be "Ready" ...
	I0817 02:44:11.092329 1662846 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 02:44:11.095935 1662846 node_ready.go:49] node "pause-20210817024148-1554185" has status "Ready":"True"
	I0817 02:44:11.095950 1662846 node_ready.go:38] duration metric: took 3.885427ms waiting for node "pause-20210817024148-1554185" to be "Ready" ...
	I0817 02:44:11.095958 1662846 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:44:11.105426 1662846 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.115130 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:11.136809 1662846 pod_ready.go:92] pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.136824 1662846 pod_ready.go:81] duration metric: took 31.377737ms waiting for pod "coredns-558bd4d5db-bzchw" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.136834 1662846 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.140355 1662846 pod_ready.go:92] pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.140372 1662846 pod_ready.go:81] duration metric: took 3.530681ms waiting for pod "etcd-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.140384 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.145229 1662846 pod_ready.go:92] pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.145269 1662846 pod_ready.go:81] duration metric: took 4.874316ms waiting for pod "kube-apiserver-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.145292 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.155084 1662846 pod_ready.go:92] pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.155097 1662846 pod_ready.go:81] duration metric: took 9.787982ms waiting for pod "kube-controller-manager-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.155105 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6fvl" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.159276 1662846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50410 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210817024148-1554185/id_rsa Username:docker}
	I0817 02:44:11.210907 1662846 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:44:11.257270 1662846 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:44:11.502673 1662846 pod_ready.go:92] pod "kube-proxy-h6fvl" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.502728 1662846 pod_ready.go:81] duration metric: took 347.614714ms waiting for pod "kube-proxy-h6fvl" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.502752 1662846 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:07.789502 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:07.789527 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:07.789539 1660780 retry.go:31] will retry after 951.868007ms: only 1 pod(s) have shown up
	I0817 02:44:08.743829 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:08.743853 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:08.743868 1660780 retry.go:31] will retry after 1.341783893s: only 1 pod(s) have shown up
	I0817 02:44:10.088004 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:10.088035 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:10.088048 1660780 retry.go:31] will retry after 1.876813009s: only 1 pod(s) have shown up
	I0817 02:44:11.967374 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:11.967401 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:11.967413 1660780 retry.go:31] will retry after 2.6934314s: only 1 pod(s) have shown up
	I0817 02:44:11.600462 1662846 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 02:44:11.600486 1662846 addons.go:344] enableAddons completed in 672.404962ms
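	Enabling the two addons above amounts to copying the manifests to /etc/kubernetes/addons/ on the node and applying them with the node's own kubectl binary over SSH. The sketch below replays the logged apply command; the port and the key location are the ones printed by the sshutil.go lines above (key path abbreviated). This is illustrative only, not how minikube's ssh_runner is implemented.

	// addon_apply.go - illustrative replay of the logged apply step.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("ssh",
			"-i", "/path/to/.minikube/machines/pause-20210817024148-1554185/id_rsa", // abbreviated key path
			"-p", "50410", "docker@127.0.0.1",
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
				"/var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml",
		)
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}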
	I0817 02:44:11.900577 1662846 pod_ready.go:92] pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:44:11.900629 1662846 pod_ready.go:81] duration metric: took 397.857202ms waiting for pod "kube-scheduler-pause-20210817024148-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:44:11.900649 1662846 pod_ready.go:38] duration metric: took 804.679453ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:44:11.900677 1662846 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:44:11.900739 1662846 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:44:11.914199 1662846 api_server.go:70] duration metric: took 986.33934ms to wait for apiserver process to appear ...
	I0817 02:44:11.914238 1662846 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:44:11.914267 1662846 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:44:11.922723 1662846 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 02:44:11.923486 1662846 api_server.go:139] control plane version: v1.21.3
	I0817 02:44:11.923532 1662846 api_server.go:129] duration metric: took 9.277267ms to wait for apiserver health ...
	I0817 02:44:11.923552 1662846 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:44:12.113654 1662846 system_pods.go:59] 8 kube-system pods found
	I0817 02:44:12.113686 1662846 system_pods.go:61] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:12.113692 1662846 system_pods.go:61] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:12.113696 1662846 system_pods.go:61] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:12.113701 1662846 system_pods.go:61] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:12.113735 1662846 system_pods.go:61] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:12.113747 1662846 system_pods.go:61] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:12.113754 1662846 system_pods.go:61] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:12.113767 1662846 system_pods.go:61] "storage-provisioner" [562918b9-84e2-4f7e-9a0a-70742893e39d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:44:12.113773 1662846 system_pods.go:74] duration metric: took 190.207086ms to wait for pod list to return data ...
	I0817 02:44:12.113797 1662846 default_sa.go:34] waiting for default service account to be created ...
	I0817 02:44:12.300796 1662846 default_sa.go:45] found service account: "default"
	I0817 02:44:12.300822 1662846 default_sa.go:55] duration metric: took 187.014117ms for default service account to be created ...
	I0817 02:44:12.300830 1662846 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 02:44:12.506751 1662846 system_pods.go:86] 8 kube-system pods found
	I0817 02:44:12.506786 1662846 system_pods.go:89] "coredns-558bd4d5db-bzchw" [5c5ce93f-6aa2-4786-b8c6-a4a48b4d9ffc] Running
	I0817 02:44:12.506793 1662846 system_pods.go:89] "etcd-pause-20210817024148-1554185" [86416d8c-9cbd-4f24-b5a9-d82793f69b3c] Running
	I0817 02:44:12.510790 1662846 system_pods.go:89] "kindnet-9lnwm" [8b4c4c45-1613-49c8-9b03-e13120205af4] Running
	I0817 02:44:12.510805 1662846 system_pods.go:89] "kube-apiserver-pause-20210817024148-1554185" [c01256a4-ded2-4745-9bb6-4d7eaec2123c] Running
	I0817 02:44:12.510832 1662846 system_pods.go:89] "kube-controller-manager-pause-20210817024148-1554185" [5549f5d5-2d3e-47f9-91df-c3dd74a5b30b] Running
	I0817 02:44:12.510838 1662846 system_pods.go:89] "kube-proxy-h6fvl" [e9b6d221-5ce9-48a8-87fd-7d13f4b7eabe] Running
	I0817 02:44:12.510844 1662846 system_pods.go:89] "kube-scheduler-pause-20210817024148-1554185" [2d96f1d4-0840-4ff1-85cc-12d71e93acf0] Running
	I0817 02:44:12.510849 1662846 system_pods.go:89] "storage-provisioner" [562918b9-84e2-4f7e-9a0a-70742893e39d] Running
	I0817 02:44:12.510855 1662846 system_pods.go:126] duration metric: took 210.020669ms to wait for k8s-apps to be running ...
	I0817 02:44:12.510862 1662846 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 02:44:12.510915 1662846 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:44:12.520616 1662846 system_svc.go:56] duration metric: took 9.75179ms WaitForService to wait for kubelet.
	I0817 02:44:12.520637 1662846 kubeadm.go:547] duration metric: took 1.592794882s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 02:44:12.520657 1662846 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:44:12.701568 1662846 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:44:12.701598 1662846 node_conditions.go:123] node cpu capacity is 2
	I0817 02:44:12.701610 1662846 node_conditions.go:105] duration metric: took 180.94709ms to run NodePressure ...
	I0817 02:44:12.701620 1662846 start.go:231] waiting for startup goroutines ...
	I0817 02:44:12.753251 1662846 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 02:44:12.756175 1662846 out.go:177] * Done! kubectl is now configured to use "pause-20210817024148-1554185" cluster and "default" namespace by default
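	The node_ready/pod_ready waits reported above boil down to polling the API server until each system-critical pod reports the Ready condition. A minimal sketch of that style of polling with client-go follows; the pod name and namespace are taken from the log, the intervals are illustrative, and this is not minikube's actual pod_ready.go code.

	// pod_ready_sketch.go - poll one pod until its Ready condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-558bd4d5db-bzchw", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			return podReady(pod), nil
		})
		fmt.Println("ready:", err == nil)
	}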
	I0817 02:44:14.664339 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:14.664360 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:14.664372 1660780 retry.go:31] will retry after 2.494582248s: only 1 pod(s) have shown up
	I0817 02:44:17.162988 1660780 system_pods.go:59] 1 kube-system pods found
	I0817 02:44:17.163020 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:17.163032 1660780 retry.go:31] will retry after 3.420895489s: only 1 pod(s) have shown up
	I0817 02:44:20.589159 1660780 system_pods.go:59] 4 kube-system pods found
	I0817 02:44:20.589189 1660780 system_pods.go:61] "coredns-fb8b8dccf-w9fv2" [041a64a9-ff05-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:20.589196 1660780 system_pods.go:61] "kindnet-jn94r" [0425580c-ff05-11eb-a19f-024225e4e7af] Running
	I0817 02:44:20.589201 1660780 system_pods.go:61] "kube-proxy-spnf9" [0425385c-ff05-11eb-a19f-024225e4e7af] Running
	I0817 02:44:20.589206 1660780 system_pods.go:61] "storage-provisioner" [fc8b4d5c-ff04-11eb-a19f-024225e4e7af] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0817 02:44:20.589212 1660780 system_pods.go:74] duration metric: took 16.383614452s to wait for pod list to return data ...
	I0817 02:44:20.589227 1660780 kubeadm.go:547] duration metric: took 16.628465985s to wait for : map[apiserver:true system_pods:true] ...
	I0817 02:44:20.589244 1660780 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:44:20.595966 1660780 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:44:20.595984 1660780 node_conditions.go:123] node cpu capacity is 2
	I0817 02:44:20.595994 1660780 node_conditions.go:105] duration metric: took 6.746092ms to run NodePressure ...
	I0817 02:44:20.596011 1660780 start.go:231] waiting for startup goroutines ...
	I0817 02:44:20.685730 1660780 start.go:462] kubectl: 1.21.3, cluster: 1.14.0 (minor skew: 7)
	I0817 02:44:20.687997 1660780 out.go:177] 
	W0817 02:44:20.688127 1660780 out.go:242] ! /usr/local/bin/kubectl is version 1.21.3, which may have incompatibilities with Kubernetes 1.14.0.
	I0817 02:44:20.689593 1660780 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0817 02:44:20.692233 1660780 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20210817024307-1554185" cluster and "default" namespace by default
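	The warning just above comes from a simple minor-version comparison between the kubectl client (1.21.3) and the cluster (1.14.0). A small sketch of that skew check, with the version strings taken from the log:

	// version_skew.go - compare client and cluster minor versions.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		client, cluster := "1.21.3", "1.14.0" // values from the start.go line above
		skew := minor(client) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl %s, cluster %s (minor skew: %d)\n", client, cluster, skew)
		if skew > 1 {
			fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n", client, cluster)
		}
	}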
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	cf9fe43a28990       ba04bb24b9575       20 seconds ago       Running             storage-provisioner       0                   62792bf694eb6
	6f0de758f96ce       1a1f05a2cd7c2       39 seconds ago       Running             coredns                   0                   b107ef4ef1079
	335440e08b6b6       f37b7c809e5dc       About a minute ago   Running             kindnet-cni               0                   ec238e8d3a6b2
	aad2134f4047a       4ea38350a1beb       About a minute ago   Running             kube-proxy                0                   771e9a30f4bda
	ec4892b38d019       44a6d50ef170d       About a minute ago   Running             kube-apiserver            0                   7a53464dc6cc7
	f45a4f177814d       cb310ff289d79       About a minute ago   Running             kube-controller-manager   0                   73b440ce137c2
	fb735a50aaaf9       05b738aa1bc63       About a minute ago   Running             etcd                      0                   3daddbac69e62
	63836f8fc4c5a       31a3b96cefc1e       About a minute ago   Running             kube-scheduler            0                   bb150a03bb9cc
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:41:51 UTC, end at Tue 2021-08-17 02:44:32 UTC. --
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204239884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204253184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204285201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204412887Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204478118Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0x40003d0f60 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.4.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204553079Z" level=info msg="Connect containerd service"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.204613452Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205685425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205900471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.205941102Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Aug 17 02:43:59 pause-20210817024148-1554185 systemd[1]: Started containerd container runtime.
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.207100615Z" level=info msg="containerd successfully booted in 0.049192s"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.211900478Z" level=info msg="Start subscribing containerd event"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.220448431Z" level=info msg="Start recovering state"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299199088Z" level=info msg="Start event monitor"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299328802Z" level=info msg="Start snapshots syncer"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299384194Z" level=info msg="Start cni network conf syncer"
	Aug 17 02:43:59 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:43:59.299434393Z" level=info msg="Start streaming server"
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.892999886Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:562918b9-84e2-4f7e-9a0a-70742893e39d,Namespace:kube-system,Attempt:0,}"
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.920384792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d pid=2435
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.993666940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:562918b9-84e2-4f7e-9a0a-70742893e39d,Namespace:kube-system,Attempt:0,} returns sandbox id \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\""
	Aug 17 02:44:11 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:11.996018212Z" level=info msg="CreateContainer within sandbox \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.024621543Z" level=info msg="CreateContainer within sandbox \"62792bf694eb6800bb10fe1ee94d49c1fa0f8e778cbfeea702775058ebdb266d\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\""
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.025144015Z" level=info msg="StartContainer for \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\""
	Aug 17 02:44:12 pause-20210817024148-1554185 containerd[2151]: time="2021-08-17T02:44:12.093587482Z" level=info msg="StartContainer for \"cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42\" returns successfully"
	
	* 
	* ==> coredns [6f0de758f96ceaeedbadfdf463e4ed8d1d8a670bd3aa0af7e0eb5081231b289a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210817024148-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-20210817024148-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210817024148-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T02_42_50_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 02:42:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210817024148-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 02:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:42:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 02:43:48 +0000   Tue, 17 Aug 2021 02:43:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    pause-20210817024148-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                1148c453-a7b1-434d-b3fe-0e100988f0a3
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-bzchw                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     88s
	  kube-system                 etcd-pause-20210817024148-1554185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         103s
	  kube-system                 kindnet-9lnwm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-pause-20210817024148-1554185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-pause-20210817024148-1554185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-h6fvl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-pause-20210817024148-1554185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  114s (x5 over 114s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x5 over 114s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x4 over 114s)  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 94s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  94s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 87s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                44s                  kubelet     Node pause-20210817024148-1554185 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [fb735a50aaaf927b46b4d2cead83ea25095749cdc1a665ca27e7deaf0801e583] <==
	* 2021-08-17 02:42:39.573167 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 02:42:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 02:42:39.573411 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 02:42:40 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 02:42:40 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 02:42:40.459131 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 02:42:40.465042 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 02:42:40.465114 I | etcdserver: published {Name:pause-20210817024148-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 02:42:40.465227 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 02:42:40.465256 I | embed: ready to serve client requests
	2021-08-17 02:42:40.469660 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 02:42:40.469974 I | embed: ready to serve client requests
	2021-08-17 02:42:40.471130 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:42:49.193636 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:03.920345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:08.512078 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:18.511903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:28.515027 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:38.512484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:48.512959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:43:58.512373 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:44:08.511866 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:44:33 up 10:26,  0 users,  load average: 2.67, 1.82, 1.30
	Linux pause-20210817024148-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [ec4892b38d019c87e892d480affe47c909afccb24233f873597d80fa7c4665f2] <==
	* I0817 02:42:47.639727       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 02:42:47.639906       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 02:42:47.665038       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 02:42:47.669713       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 02:42:47.669739       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 02:42:48.274054       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 02:42:48.310293       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 02:42:48.398617       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 02:42:48.400201       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 02:42:48.403641       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 02:42:49.313095       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 02:42:49.844292       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 02:42:49.897658       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 02:42:58.279367       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 02:43:04.136440       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0817 02:43:04.199838       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 02:43:21.010651       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:43:21.010876       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:43:21.010973       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:43:51.301051       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:43:51.301091       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:43:51.301099       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:44:28.852464       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:44:28.852640       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:44:28.852660       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [f45a4f177814d8e45068aee694f3492001c9261c63e9d4a7dc6fe54ab966a367] <==
	* I0817 02:43:03.483116       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0817 02:43:03.483462       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0817 02:43:03.483847       1 event.go:291] "Event occurred" object="pause-20210817024148-1554185" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210817024148-1554185 event: Registered Node pause-20210817024148-1554185 in Controller"
	I0817 02:43:03.491024       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0817 02:43:03.525270       1 shared_informer.go:247] Caches are synced for HPA 
	I0817 02:43:03.531014       1 shared_informer.go:247] Caches are synced for attach detach 
	I0817 02:43:03.531095       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 02:43:03.531106       1 shared_informer.go:247] Caches are synced for endpoint 
	I0817 02:43:03.542828       1 shared_informer.go:247] Caches are synced for disruption 
	I0817 02:43:03.542888       1 disruption.go:371] Sending events to api server.
	I0817 02:43:03.543000       1 shared_informer.go:247] Caches are synced for job 
	I0817 02:43:03.543063       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0817 02:43:03.593684       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:43:03.657513       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 02:43:04.079136       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:43:04.079314       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 02:43:04.127990       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 02:43:04.138879       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0817 02:43:04.216419       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h6fvl"
	I0817 02:43:04.226354       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9lnwm"
	I0817 02:43:04.391394       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0817 02:43:04.400006       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-z7v6b"
	I0817 02:43:04.411923       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-bzchw"
	I0817 02:43:04.436039       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-z7v6b"
	I0817 02:43:48.489538       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [aad2134f4047a0355037c106be1e03aab147a921033b3042e86497dd8533ae66] <==
	* I0817 02:43:05.040138       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:43:05.040427       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:43:05.040569       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:43:05.066321       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:43:05.066475       1 server_others.go:212] Using iptables Proxier.
	I0817 02:43:05.066558       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:43:05.066632       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:43:05.067006       1 server.go:643] Version: v1.21.3
	I0817 02:43:05.067885       1 config.go:315] Starting service config controller
	I0817 02:43:05.068016       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:43:05.068105       1 config.go:224] Starting endpoint slice config controller
	I0817 02:43:05.068187       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:43:05.075717       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:43:05.079542       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:43:05.169159       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 02:43:05.169216       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [63836f8fc4c5a18221be0b59259416783bb1f5996300c88e99c43411d2616d08] <==
	* E0817 02:42:46.829872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:42:46.829928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0817 02:42:46.830223       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 02:42:46.830599       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:42:46.830655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:42:46.830711       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:42:46.830755       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:42:46.833525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.833686       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:42:46.833809       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.834107       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 02:42:46.840341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:46.843238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:42:47.692486       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.720037       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:42:47.720290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 02:42:47.763006       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.788725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:42:47.934043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:42:47.972589       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:42:47.976875       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:42:47.998841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:42:48.041214       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:42:48.197544       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 02:42:49.931904       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:41:51 UTC, end at Tue 2021-08-17 02:44:33 UTC. --
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738343    3444 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738392    3444 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738458    3444 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738472    3444 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738481    3444 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738551    3444 remote_runtime.go:62] parsed scheme: ""
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738558    3444 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738589    3444 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738597    3444 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738637    3444 remote_image.go:50] parsed scheme: ""
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738643    3444 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738650    3444 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738655    3444 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738705    3444 kubelet.go:404] "Attempting to sync node with API server"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738720    3444 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738743    3444 kubelet.go:283] "Adding apiserver pod source"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738765    3444 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.738956    3444 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	Aug 17 02:44:26 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:26.762637    3444 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="1.4.9" apiVersion="v1alpha2"
	Aug 17 02:44:27 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:27.740525    3444 apiserver.go:52] "Watching apiserver"
	Aug 17 02:44:30 pause-20210817024148-1554185 kubelet[3444]: E0817 02:44:30.056391    3444 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 17 02:44:30 pause-20210817024148-1554185 kubelet[3444]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 17 02:44:30 pause-20210817024148-1554185 kubelet[3444]: I0817 02:44:30.057365    3444 server.go:1190] "Started kubelet"
	Aug 17 02:44:30 pause-20210817024148-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 02:44:30 pause-20210817024148-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [cf9fe43a28990f2f85538f39200aa1b92f2794446076fed546aa402115529d42] <==
	* I0817 02:44:12.092161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 02:44:12.117851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 02:44:12.117933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 02:44:12.144567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 02:44:12.144687       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7dcc4aca-a2da-4802-9687-f8a1d81928d3", APIVersion:"v1", ResourceVersion:"515", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10 became leader
	I0817 02:44:12.145088       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10!
	I0817 02:44:12.245861       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210817024148-1554185_3db8a471-2f49-4034-9958-cc7d2cffae10!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210817024148-1554185 -n pause-20210817024148-1554185: exit status 2 (324.191769ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210817024148-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context pause-20210817024148-1554185 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210817024148-1554185 describe pod : exit status 1 (67.042168ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context pause-20210817024148-1554185 describe pod : exit status 1
--- FAIL: TestPause/serial/PauseAgain (12.09s)
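The empty "kubectl describe pod" invocation above is expected here: the post-mortem helper first lists pods whose phase is not Running, and when that list is empty the describe command receives no resource names, so kubectl exits 1 with "error: resource name may not be empty". A minimal shell sketch of the same sequence, reusing the context name from the log (an illustration only, not the test helper's actual code):

	# List pods outside the Running phase; in this run the selector matches nothing,
	# so PODS is empty.
	PODS=$(kubectl --context pause-20210817024148-1554185 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector=status.phase!=Running)
	# With no pod names supplied, kubectl fails with "error: resource name may not be empty".
	kubectl --context pause-20210817024148-1554185 describe pod $PODS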

TestStartStop/group/old-k8s-version/serial/SecondStart (704.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-20210817024805-1554185 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-20210817024805-1554185 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: exit status 109 (11m42.8674619s)

-- stdout --
	* [old-k8s-version-20210817024805-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node old-k8s-version-20210817024805-1554185 in cluster old-k8s-version-20210817024805-1554185
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20210817024805-1554185" ...
	* Preparing Kubernetes v1.14.0 on containerd 1.4.9 ...
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Aug 17 03:02:27 old-k8s-version-20210817024805-1554185 kubelet[14430]: F0817 03:02:27.532257   14430 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	  Aug 17 03:02:28 old-k8s-version-20210817024805-1554185 kubelet[14458]: F0817 03:02:28.447274   14458 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	  Aug 17 03:02:29 old-k8s-version-20210817024805-1554185 kubelet[14486]: F0817 03:02:29.466067   14486 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	
	

-- /stdout --
** stderr ** 
	I0817 02:50:50.575780 1683677 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:50:50.576035 1683677 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:50:50.576615 1683677 out.go:311] Setting ErrFile to fd 2...
	I0817 02:50:50.576638 1683677 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:50:50.576825 1683677 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:50:50.577123 1683677 out.go:305] Setting JSON to false
	I0817 02:50:50.578120 1683677 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37989,"bootTime":1629130662,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:50:50.578224 1683677 start.go:121] virtualization:  
	I0817 02:50:50.580433 1683677 out.go:177] * [old-k8s-version-20210817024805-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:50:50.582245 1683677 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:50:50.581418 1683677 notify.go:169] Checking for updates...
	I0817 02:50:50.583735 1683677 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:50:50.585414 1683677 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:50:50.587033 1683677 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:50:50.587428 1683677 config.go:177] Loaded profile config "old-k8s-version-20210817024805-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0817 02:50:50.589747 1683677 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0817 02:50:50.589776 1683677 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:50:50.631118 1683677 docker.go:132] docker version: linux-20.10.8
	I0817 02:50:50.631196 1683677 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:50:50.734573 1683677 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:50:50.680373986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:50:50.734673 1683677 docker.go:244] overlay module found
	I0817 02:50:50.736675 1683677 out.go:177] * Using the docker driver based on existing profile
	I0817 02:50:50.736696 1683677 start.go:278] selected driver: docker
	I0817 02:50:50.736703 1683677 start.go:751] validating driver "docker" against &{Name:old-k8s-version-20210817024805-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210817024805-1554185 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHos
tTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:50:50.736801 1683677 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 02:50:50.736839 1683677 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 02:50:50.736857 1683677 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0817 02:50:50.738749 1683677 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 02:50:50.739105 1683677 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:50:50.821554 1683677 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:50:50.767699239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 02:50:50.821678 1683677 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 02:50:50.821699 1683677 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0817 02:50:50.823531 1683677 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 02:50:50.823619 1683677 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 02:50:50.823643 1683677 cni.go:93] Creating CNI manager for ""
	I0817 02:50:50.823650 1683677 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:50:50.823659 1683677 start_flags.go:277] config:
	{Name:old-k8s-version-20210817024805-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210817024805-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiN
odeRequested:false ExtraDisks:0}
	I0817 02:50:50.825508 1683677 out.go:177] * Starting control plane node old-k8s-version-20210817024805-1554185 in cluster old-k8s-version-20210817024805-1554185
	I0817 02:50:50.825535 1683677 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 02:50:50.826960 1683677 out.go:177] * Pulling base image ...
	I0817 02:50:50.826982 1683677 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0817 02:50:50.827014 1683677 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4
	I0817 02:50:50.827101 1683677 cache.go:56] Caching tarball of preloaded images
	I0817 02:50:50.827079 1683677 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 02:50:50.827263 1683677 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 02:50:50.827277 1683677 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on containerd
	I0817 02:50:50.827392 1683677 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/config.json ...
	I0817 02:50:50.885100 1683677 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 02:50:50.885130 1683677 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 02:50:50.885145 1683677 cache.go:205] Successfully downloaded all kic artifacts
	I0817 02:50:50.885183 1683677 start.go:313] acquiring machines lock for old-k8s-version-20210817024805-1554185: {Name:mkac59363cec16b6893e94302a643c00dfcfd78e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 02:50:50.885264 1683677 start.go:317] acquired machines lock for "old-k8s-version-20210817024805-1554185" in 57.468µs
	I0817 02:50:50.885288 1683677 start.go:93] Skipping create...Using existing machine configuration
	I0817 02:50:50.885297 1683677 fix.go:55] fixHost starting: 
	I0817 02:50:50.885615 1683677 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210817024805-1554185 --format={{.State.Status}}
	I0817 02:50:50.915426 1683677 fix.go:108] recreateIfNeeded on old-k8s-version-20210817024805-1554185: state=Stopped err=<nil>
	W0817 02:50:50.915457 1683677 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 02:50:50.917306 1683677 out.go:177] * Restarting existing docker container for "old-k8s-version-20210817024805-1554185" ...
	I0817 02:50:50.917359 1683677 cli_runner.go:115] Run: docker start old-k8s-version-20210817024805-1554185
	I0817 02:50:51.269170 1683677 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210817024805-1554185 --format={{.State.Status}}
	I0817 02:50:51.306890 1683677 kic.go:420] container "old-k8s-version-20210817024805-1554185" state is running.
	I0817 02:50:51.307239 1683677 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210817024805-1554185
	I0817 02:50:51.338367 1683677 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/config.json ...
	I0817 02:50:51.338592 1683677 machine.go:88] provisioning docker machine ...
	I0817 02:50:51.338619 1683677 ubuntu.go:169] provisioning hostname "old-k8s-version-20210817024805-1554185"
	I0817 02:50:51.338666 1683677 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817024805-1554185
	I0817 02:50:51.384875 1683677 main.go:130] libmachine: Using SSH client type: native
	I0817 02:50:51.385042 1683677 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50468 <nil> <nil>}
	I0817 02:50:51.385062 1683677 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210817024805-1554185 && echo "old-k8s-version-20210817024805-1554185" | sudo tee /etc/hostname
	I0817 02:50:51.385576 1683677 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48648->127.0.0.1:50468: read: connection reset by peer
	I0817 02:50:54.509591 1683677 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210817024805-1554185
	
	I0817 02:50:54.509671 1683677 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817024805-1554185
	I0817 02:50:54.541627 1683677 main.go:130] libmachine: Using SSH client type: native
	I0817 02:50:54.541786 1683677 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50468 <nil> <nil>}
	I0817 02:50:54.541814 1683677 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210817024805-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210817024805-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210817024805-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 02:50:54.657906 1683677 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 02:50:54.657926 1683677 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 02:50:54.657958 1683677 ubuntu.go:177] setting up certificates
	I0817 02:50:54.657966 1683677 provision.go:83] configureAuth start
	I0817 02:50:54.658023 1683677 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210817024805-1554185
	I0817 02:50:54.689908 1683677 provision.go:138] copyHostCerts
	I0817 02:50:54.689959 1683677 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 02:50:54.689970 1683677 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 02:50:54.690031 1683677 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 02:50:54.690116 1683677 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 02:50:54.690127 1683677 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 02:50:54.690151 1683677 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 02:50:54.690206 1683677 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 02:50:54.690216 1683677 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 02:50:54.690240 1683677 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 02:50:54.690287 1683677 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20210817024805-1554185 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210817024805-1554185]
	I0817 02:50:55.244366 1683677 provision.go:172] copyRemoteCerts
	I0817 02:50:55.244478 1683677 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 02:50:55.244542 1683677 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817024805-1554185
	I0817 02:50:55.284815 1683677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50468 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210817024805-1554185/id_rsa Username:docker}
	I0817 02:50:55.373541 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 02:50:55.392017 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0817 02:50:55.408985 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 02:50:55.427729 1683677 provision.go:86] duration metric: configureAuth took 769.752712ms
	I0817 02:50:55.427744 1683677 ubuntu.go:193] setting minikube options for container-runtime
	I0817 02:50:55.427908 1683677 config.go:177] Loaded profile config "old-k8s-version-20210817024805-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0817 02:50:55.427915 1683677 machine.go:91] provisioned docker machine in 4.089306651s
	I0817 02:50:55.427922 1683677 start.go:267] post-start starting for "old-k8s-version-20210817024805-1554185" (driver="docker")
	I0817 02:50:55.427928 1683677 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 02:50:55.427969 1683677 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 02:50:55.428002 1683677 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817024805-1554185
	I0817 02:50:55.462524 1683677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50468 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210817024805-1554185/id_rsa Username:docker}
	I0817 02:50:55.555127 1683677 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 02:50:55.558343 1683677 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 02:50:55.558362 1683677 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 02:50:55.558373 1683677 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 02:50:55.558379 1683677 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 02:50:55.558387 1683677 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 02:50:55.558439 1683677 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 02:50:55.558514 1683677 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 02:50:55.558597 1683677 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 02:50:55.566055 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:50:55.587656 1683677 start.go:270] post-start completed in 159.722217ms
	I0817 02:50:55.587707 1683677 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:50:55.587781 1683677 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817024805-1554185
	I0817 02:50:55.622375 1683677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50468 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210817024805-1554185/id_rsa Username:docker}
	I0817 02:50:55.702357 1683677 fix.go:57] fixHost completed within 4.817054657s
	I0817 02:50:55.702403 1683677 start.go:80] releasing machines lock for "old-k8s-version-20210817024805-1554185", held for 4.817124531s
	I0817 02:50:55.702490 1683677 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210817024805-1554185
	I0817 02:50:55.736379 1683677 ssh_runner.go:149] Run: systemctl --version
	I0817 02:50:55.736428 1683677 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 02:50:55.736482 1683677 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817024805-1554185
	I0817 02:50:55.736485 1683677 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817024805-1554185
	I0817 02:50:55.770689 1683677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50468 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210817024805-1554185/id_rsa Username:docker}
	I0817 02:50:55.791254 1683677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50468 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210817024805-1554185/id_rsa Username:docker}
	I0817 02:50:55.854308 1683677 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 02:50:56.021961 1683677 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 02:50:56.031566 1683677 docker.go:153] disabling docker service ...
	I0817 02:50:56.031613 1683677 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 02:50:56.041989 1683677 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 02:50:56.055901 1683677 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 02:50:56.140196 1683677 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 02:50:56.210388 1683677 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 02:50:56.218726 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 02:50:56.229501 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMSIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CgoJW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQ
uZ3JwYy52MS5jcmkiXQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lc10KICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmMub3B0aW9uc10KICAgICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGN
vbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0817 02:50:56.241970 1683677 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 02:50:56.247940 1683677 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 02:50:56.253589 1683677 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 02:50:56.323367 1683677 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 02:50:56.422806 1683677 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 02:50:56.422883 1683677 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 02:50:56.426382 1683677 start.go:413] Will wait 60s for crictl version
	I0817 02:50:56.426428 1683677 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:50:56.455075 1683677 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T02:50:56Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 02:51:07.503158 1683677 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:51:07.528346 1683677 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 02:51:07.528395 1683677 ssh_runner.go:149] Run: containerd --version
	I0817 02:51:07.550208 1683677 ssh_runner.go:149] Run: containerd --version
	I0817 02:51:07.573406 1683677 out.go:177] * Preparing Kubernetes v1.14.0 on containerd 1.4.9 ...
	I0817 02:51:07.573484 1683677 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210817024805-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 02:51:07.603956 1683677 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0817 02:51:07.606637 1683677 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 02:51:07.614734 1683677 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0817 02:51:07.614791 1683677 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:51:07.642698 1683677 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:51:07.642717 1683677 containerd.go:517] Images already preloaded, skipping extraction
	I0817 02:51:07.642755 1683677 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:51:07.668773 1683677 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:51:07.668792 1683677 cache_images.go:74] Images are preloaded, skipping loading
	I0817 02:51:07.668852 1683677 ssh_runner.go:149] Run: sudo crictl info
	I0817 02:51:07.691027 1683677 cni.go:93] Creating CNI manager for ""
	I0817 02:51:07.691048 1683677 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:51:07.691058 1683677 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 02:51:07.691074 1683677 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210817024805-1554185 NodeName:old-k8s-version-20210817024805-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cg
roupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 02:51:07.691205 1683677 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20210817024805-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210817024805-1554185
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 02:51:07.691296 1683677 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20210817024805-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210817024805-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 02:51:07.691354 1683677 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0817 02:51:07.697253 1683677 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 02:51:07.697329 1683677 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 02:51:07.703170 1683677 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (655 bytes)
	I0817 02:51:07.714779 1683677 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 02:51:07.726294 1683677 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0817 02:51:07.737639 1683677 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0817 02:51:07.740430 1683677 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 02:51:07.748351 1683677 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185 for IP: 192.168.58.2
	I0817 02:51:07.748391 1683677 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 02:51:07.748422 1683677 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 02:51:07.748478 1683677 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.key
	I0817 02:51:07.748504 1683677 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/apiserver.key.cee25041
	I0817 02:51:07.748524 1683677 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/proxy-client.key
	I0817 02:51:07.748617 1683677 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 02:51:07.748658 1683677 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 02:51:07.748671 1683677 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 02:51:07.748695 1683677 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 02:51:07.748723 1683677 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 02:51:07.748759 1683677 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 02:51:07.748806 1683677 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:51:07.749850 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 02:51:07.765239 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 02:51:07.780525 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 02:51:07.796350 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 02:51:07.811483 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 02:51:07.826019 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 02:51:07.840844 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 02:51:07.856092 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 02:51:07.871398 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 02:51:07.886173 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 02:51:07.900937 1683677 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 02:51:07.915328 1683677 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 02:51:07.926287 1683677 ssh_runner.go:149] Run: openssl version
	I0817 02:51:07.930593 1683677 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 02:51:07.936876 1683677 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:51:07.939550 1683677 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:51:07.939609 1683677 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:51:07.943784 1683677 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 02:51:07.949528 1683677 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 02:51:07.955802 1683677 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 02:51:07.958405 1683677 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 02:51:07.958466 1683677 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 02:51:07.962715 1683677 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 02:51:07.968516 1683677 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 02:51:07.974737 1683677 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 02:51:07.977298 1683677 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 02:51:07.977337 1683677 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 02:51:07.981673 1683677 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 02:51:07.987359 1683677 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210817024805-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210817024805-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:51:07.987448 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 02:51:07.987486 1683677 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:51:08.011662 1683677 cri.go:76] found id: "0df39481530040e83d9382b05122709e89ff12a92da7e1353d5cc58d9e8fe7d1"
	I0817 02:51:08.011705 1683677 cri.go:76] found id: "8ecac5e4c8333e56f5d3388cb3cc94457b9a32adccbbdc1c519ff01e6c1dde98"
	I0817 02:51:08.011718 1683677 cri.go:76] found id: "fe2df3e7895de9ae44d0b5257823710fe9ddfdff82e17071e2701cff4f211307"
	I0817 02:51:08.011723 1683677 cri.go:76] found id: "c2a72c583cef7c3ec36a5a3f51c03d8965ca637119ee385c625f97a331628ef9"
	I0817 02:51:08.011727 1683677 cri.go:76] found id: "fdb4bd345708970e1d90521f8d81da07a88c79e442c82c7c115a4c1c6ded93a0"
	I0817 02:51:08.011732 1683677 cri.go:76] found id: "f8b050af48208844c31f77ed2dc4fc25f4633ce187e85801e393aa0fce9c1ce0"
	I0817 02:51:08.011739 1683677 cri.go:76] found id: "2c48aa387b60234b5845590a62ab0933aef10e3afa1695cc7f5a93e93dc5b0c0"
	I0817 02:51:08.011744 1683677 cri.go:76] found id: "86ac8067fb1b5139e8f2e23b9daa6b76aa704ec28b4c4cf6d281c7293bc4259d"
	I0817 02:51:08.011751 1683677 cri.go:76] found id: ""
	I0817 02:51:08.011784 1683677 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:51:08.024123 1683677 cri.go:103] JSON = null
	W0817 02:51:08.024173 1683677 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0817 02:51:08.024224 1683677 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 02:51:08.030043 1683677 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 02:51:08.030082 1683677 kubeadm.go:600] restartCluster start
	I0817 02:51:08.030144 1683677 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 02:51:08.035616 1683677 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:08.036458 1683677 kubeconfig.go:117] verify returned: extract IP: "old-k8s-version-20210817024805-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:51:08.036738 1683677 kubeconfig.go:128] "old-k8s-version-20210817024805-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 02:51:08.037289 1683677 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:51:08.039670 1683677 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 02:51:08.046928 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:08.046969 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:08.055883 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:08.256221 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:08.256294 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:08.265807 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:08.455991 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:08.456060 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:08.465176 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:08.656415 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:08.656478 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:08.665323 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:08.856532 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:08.856593 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:08.865713 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:09.055956 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:09.056022 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:09.064981 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:09.256298 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:09.256370 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:09.265478 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:09.456717 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:09.456811 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:09.468039 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:09.656380 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:09.656450 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:09.665870 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:09.856008 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:09.856072 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:09.867752 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:10.055998 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:10.056067 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:10.065541 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:10.256764 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:10.256842 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:10.266197 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:10.456442 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:10.456527 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:10.465620 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:10.655952 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:10.656017 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:10.665089 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:10.856298 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:10.856375 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:10.865884 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:11.056191 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:11.056256 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:11.065295 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:11.065312 1683677 api_server.go:164] Checking apiserver status ...
	I0817 02:51:11.065348 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:11.074104 1683677 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:11.074142 1683677 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0817 02:51:11.074162 1683677 kubeadm.go:1032] stopping kube-system containers ...
	I0817 02:51:11.074181 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:51:11.074234 1683677 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:51:11.096750 1683677 cri.go:76] found id: "0df39481530040e83d9382b05122709e89ff12a92da7e1353d5cc58d9e8fe7d1"
	I0817 02:51:11.096770 1683677 cri.go:76] found id: "8ecac5e4c8333e56f5d3388cb3cc94457b9a32adccbbdc1c519ff01e6c1dde98"
	I0817 02:51:11.096776 1683677 cri.go:76] found id: "fe2df3e7895de9ae44d0b5257823710fe9ddfdff82e17071e2701cff4f211307"
	I0817 02:51:11.096781 1683677 cri.go:76] found id: "c2a72c583cef7c3ec36a5a3f51c03d8965ca637119ee385c625f97a331628ef9"
	I0817 02:51:11.096806 1683677 cri.go:76] found id: "fdb4bd345708970e1d90521f8d81da07a88c79e442c82c7c115a4c1c6ded93a0"
	I0817 02:51:11.096811 1683677 cri.go:76] found id: "f8b050af48208844c31f77ed2dc4fc25f4633ce187e85801e393aa0fce9c1ce0"
	I0817 02:51:11.096815 1683677 cri.go:76] found id: "2c48aa387b60234b5845590a62ab0933aef10e3afa1695cc7f5a93e93dc5b0c0"
	I0817 02:51:11.096820 1683677 cri.go:76] found id: "86ac8067fb1b5139e8f2e23b9daa6b76aa704ec28b4c4cf6d281c7293bc4259d"
	I0817 02:51:11.096824 1683677 cri.go:76] found id: ""
	I0817 02:51:11.096831 1683677 cri.go:221] Stopping containers: [0df39481530040e83d9382b05122709e89ff12a92da7e1353d5cc58d9e8fe7d1 8ecac5e4c8333e56f5d3388cb3cc94457b9a32adccbbdc1c519ff01e6c1dde98 fe2df3e7895de9ae44d0b5257823710fe9ddfdff82e17071e2701cff4f211307 c2a72c583cef7c3ec36a5a3f51c03d8965ca637119ee385c625f97a331628ef9 fdb4bd345708970e1d90521f8d81da07a88c79e442c82c7c115a4c1c6ded93a0 f8b050af48208844c31f77ed2dc4fc25f4633ce187e85801e393aa0fce9c1ce0 2c48aa387b60234b5845590a62ab0933aef10e3afa1695cc7f5a93e93dc5b0c0 86ac8067fb1b5139e8f2e23b9daa6b76aa704ec28b4c4cf6d281c7293bc4259d]
	I0817 02:51:11.096885 1683677 ssh_runner.go:149] Run: which crictl
	I0817 02:51:11.099376 1683677 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 0df39481530040e83d9382b05122709e89ff12a92da7e1353d5cc58d9e8fe7d1 8ecac5e4c8333e56f5d3388cb3cc94457b9a32adccbbdc1c519ff01e6c1dde98 fe2df3e7895de9ae44d0b5257823710fe9ddfdff82e17071e2701cff4f211307 c2a72c583cef7c3ec36a5a3f51c03d8965ca637119ee385c625f97a331628ef9 fdb4bd345708970e1d90521f8d81da07a88c79e442c82c7c115a4c1c6ded93a0 f8b050af48208844c31f77ed2dc4fc25f4633ce187e85801e393aa0fce9c1ce0 2c48aa387b60234b5845590a62ab0933aef10e3afa1695cc7f5a93e93dc5b0c0 86ac8067fb1b5139e8f2e23b9daa6b76aa704ec28b4c4cf6d281c7293bc4259d
	I0817 02:51:11.122009 1683677 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 02:51:11.131177 1683677 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:51:11.137247 1683677 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5755 Aug 17 02:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5791 Aug 17 02:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5951 Aug 17 02:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5743 Aug 17 02:48 /etc/kubernetes/scheduler.conf
	
	I0817 02:51:11.137293 1683677 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 02:51:11.143258 1683677 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 02:51:11.149216 1683677 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 02:51:11.155222 1683677 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 02:51:11.161162 1683677 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 02:51:11.167066 1683677 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 02:51:11.167114 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:11.934066 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:14.379905 1683677 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.445810992s)
	I0817 02:51:14.379932 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:14.512814 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:14.574426 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:14.633653 1683677 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:51:14.633711 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:15.143968 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:15.643592 1683677 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:15.671581 1683677 api_server.go:70] duration metric: took 1.03792797s to wait for apiserver process to appear ...
	I0817 02:51:15.671600 1683677 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:51:15.671609 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:15.671898 1683677 api_server.go:255] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0817 02:51:16.172473 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:21.173334 1683677 api_server.go:255] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 02:51:21.672990 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:25.474293 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 02:51:25.474312 1683677 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 02:51:25.672640 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:25.773285 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 02:51:25.773350 1683677 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 02:51:26.172489 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:26.181455 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0817 02:51:26.181515 1683677 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0817 02:51:26.672039 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:26.684988 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0817 02:51:26.699432 1683677 api_server.go:139] control plane version: v1.14.0
	I0817 02:51:26.699470 1683677 api_server.go:129] duration metric: took 11.027864328s to wait for apiserver health ...
	I0817 02:51:26.699506 1683677 cni.go:93] Creating CNI manager for ""
	I0817 02:51:26.699525 1683677 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:51:26.701474 1683677 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:51:26.701537 1683677 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:51:26.704950 1683677 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0817 02:51:26.704967 1683677 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:51:26.716527 1683677 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:51:26.967205 1683677 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:51:26.979532 1683677 system_pods.go:59] 8 kube-system pods found
	I0817 02:51:26.979567 1683677 system_pods.go:61] "coredns-fb8b8dccf-jp8m9" [b23bfd69-ff05-11eb-b750-02420e977974] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 02:51:26.979574 1683677 system_pods.go:61] "etcd-old-k8s-version-20210817024805-1554185" [d5b20d31-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979604 1683677 system_pods.go:61] "kindnet-n5vgl" [b2493cdc-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979609 1683677 system_pods.go:61] "kube-apiserver-old-k8s-version-20210817024805-1554185" [d8145a64-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979621 1683677 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210817024805-1554185" [cfbc7b8c-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979625 1683677 system_pods.go:61] "kube-proxy-nhh5q" [b248f49f-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979630 1683677 system_pods.go:61] "kube-scheduler-old-k8s-version-20210817024805-1554185" [d2b8571e-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979640 1683677 system_pods.go:61] "storage-provisioner" [b32ef806-ff05-11eb-b750-02420e977974] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:51:26.979647 1683677 system_pods.go:74] duration metric: took 12.42679ms to wait for pod list to return data ...
	I0817 02:51:26.979670 1683677 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:51:26.982872 1683677 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:51:26.982920 1683677 node_conditions.go:123] node cpu capacity is 2
	I0817 02:51:26.982944 1683677 node_conditions.go:105] duration metric: took 3.261441ms to run NodePressure ...
	I0817 02:51:26.982957 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:27.127614 1683677 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 02:51:27.130997 1683677 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0817 02:51:27.503592 1683677 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0817 02:51:27.944862 1683677 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0817 02:51:28.477064 1683677 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0817 02:51:29.262325 1683677 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0817 02:51:30.768179 1683677 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0817 02:51:31.845796 1683677 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0817 02:51:33.718958 1683677 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0817 02:51:36.272473 1683677 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0817 02:51:41.408402 1683677 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0817 02:51:51.169969 1683677 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0817 02:52:10.112475 1683677 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0817 02:52:25.563354 1683677 kubeadm.go:746] kubelet initialised
	I0817 02:52:25.563374 1683677 kubeadm.go:747] duration metric: took 58.435709937s waiting for restarted kubelet to initialise ...
	I0817 02:52:25.563381 1683677 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:52:25.568074 1683677 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-jp8m9" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.577170 1683677 pod_ready.go:92] pod "coredns-fb8b8dccf-jp8m9" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.577194 1683677 pod_ready.go:81] duration metric: took 9.090528ms waiting for pod "coredns-fb8b8dccf-jp8m9" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.577204 1683677 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-mcnc6" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.580884 1683677 pod_ready.go:92] pod "coredns-fb8b8dccf-mcnc6" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.580901 1683677 pod_ready.go:81] duration metric: took 3.691246ms waiting for pod "coredns-fb8b8dccf-mcnc6" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.580908 1683677 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.584951 1683677 pod_ready.go:92] pod "etcd-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.584969 1683677 pod_ready.go:81] duration metric: took 4.05418ms waiting for pod "etcd-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.584979 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.588894 1683677 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.588913 1683677 pod_ready.go:81] duration metric: took 3.925311ms waiting for pod "kube-apiserver-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.588922 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.961580 1683677 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.961598 1683677 pod_ready.go:81] duration metric: took 372.668062ms waiting for pod "kube-controller-manager-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.961609 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nhh5q" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.362231 1683677 pod_ready.go:92] pod "kube-proxy-nhh5q" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:26.362251 1683677 pod_ready.go:81] duration metric: took 400.63554ms waiting for pod "kube-proxy-nhh5q" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.362261 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.761518 1683677 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:26.761538 1683677 pod_ready.go:81] duration metric: took 399.268628ms waiting for pod "kube-scheduler-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.761549 1683677 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:29.166693 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:31.166878 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:33.166960 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:35.667458 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:38.167056 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:40.167129 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:42.665923 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:44.674856 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:47.166577 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:49.167004 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:51.666930 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:54.166024 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:56.167473 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:58.667062 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:01.166392 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:03.166660 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:05.667248 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:08.167175 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:10.167277 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:12.666646 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:14.667107 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:17.166790 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:19.666869 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:21.666936 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:23.667266 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:25.667896 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:28.166786 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:30.166991 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:32.666672 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:34.674735 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:37.166927 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:39.667330 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:42.167056 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:44.666646 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:46.667864 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:49.166934 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:51.667345 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:54.166496 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:56.167389 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:58.666838 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:00.666895 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:03.166942 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:05.666841 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:08.166343 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:10.167428 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:12.666920 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:15.167557 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:17.667314 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:20.167520 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:22.667134 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:25.167010 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:27.167117 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:29.666570 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:32.166673 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:34.167205 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:36.167278 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:38.666736 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:41.166234 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:43.167351 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:45.666902 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:48.166473 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:50.167180 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:52.666625 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:55.166893 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:57.167206 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:59.667203 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:02.166246 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:04.166904 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:06.167605 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:08.666362 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:10.666627 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:13.166461 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:15.666987 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:17.667555 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:20.167216 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:22.666194 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:24.670235 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:27.166353 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:29.166945 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:31.666651 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:34.166901 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:36.666743 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:39.166482 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:41.167158 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:43.176080 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:45.666970 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:48.167040 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:50.667197 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:53.167284 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:55.666563 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:58.165868 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:00.166547 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:02.169578 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:04.666593 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:06.666772 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:08.668718 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:11.167180 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:13.666474 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:15.667122 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:17.667179 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:19.667252 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:22.166690 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:24.167040 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:26.666042 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:27.162685 1683677 pod_ready.go:81] duration metric: took 4m0.40112198s waiting for pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace to be "Ready" ...
	E0817 02:56:27.162707 1683677 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0817 02:56:27.162724 1683677 pod_ready.go:38] duration metric: took 4m1.599333201s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:56:27.162750 1683677 kubeadm.go:604] restartCluster took 5m19.132650156s
	W0817 02:56:27.162885 1683677 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 02:56:27.162914 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0817 02:56:29.771314 1683677 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.608376314s)
	I0817 02:56:29.771371 1683677 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0817 02:56:29.783800 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:56:29.783862 1683677 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:56:29.828158 1683677 cri.go:76] found id: ""
	I0817 02:56:29.828206 1683677 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 02:56:29.841550 1683677 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 02:56:29.841592 1683677 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:56:29.851739 1683677 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 02:56:29.851771 1683677 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 02:56:30.528257 1683677 out.go:204]   - Generating certificates and keys ...
	I0817 02:56:34.213997 1683677 out.go:204]   - Booting up control plane ...
	W0817 02:58:29.253234 1683677 out.go:242] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	I0817 02:58:29.253292 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0817 02:58:29.334564 1683677 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0817 02:58:29.344569 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:58:29.344631 1683677 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:58:29.380931 1683677 cri.go:76] found id: ""
	I0817 02:58:29.380972 1683677 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 02:58:29.381015 1683677 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:58:29.389355 1683677 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 02:58:29.389411 1683677 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 02:58:29.722372 1683677 out.go:204]   - Generating certificates and keys ...
	I0817 02:58:32.867979 1683677 out.go:204]   - Booting up control plane ...
	I0817 03:02:32.895057 1683677 kubeadm.go:392] StartCluster complete in 11m24.90769791s
	I0817 03:02:32.895103 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 03:02:32.895159 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 03:02:32.918790 1683677 cri.go:76] found id: ""
	I0817 03:02:32.918806 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.918827 1683677 logs.go:272] No container was found matching "kube-apiserver"
	I0817 03:02:32.918833 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 03:02:32.918883 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 03:02:32.949174 1683677 cri.go:76] found id: ""
	I0817 03:02:32.949187 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.949193 1683677 logs.go:272] No container was found matching "etcd"
	I0817 03:02:32.949198 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 03:02:32.949239 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 03:02:32.969914 1683677 cri.go:76] found id: ""
	I0817 03:02:32.969929 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.969935 1683677 logs.go:272] No container was found matching "coredns"
	I0817 03:02:32.969939 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 03:02:32.969977 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 03:02:32.990332 1683677 cri.go:76] found id: ""
	I0817 03:02:32.990347 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.990353 1683677 logs.go:272] No container was found matching "kube-scheduler"
	I0817 03:02:32.990358 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 03:02:32.990402 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 03:02:33.012039 1683677 cri.go:76] found id: ""
	I0817 03:02:33.012053 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.012059 1683677 logs.go:272] No container was found matching "kube-proxy"
	I0817 03:02:33.012064 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 03:02:33.012102 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 03:02:33.032711 1683677 cri.go:76] found id: ""
	I0817 03:02:33.032724 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.032729 1683677 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 03:02:33.032734 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 03:02:33.032772 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 03:02:33.052565 1683677 cri.go:76] found id: ""
	I0817 03:02:33.052577 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.052582 1683677 logs.go:272] No container was found matching "storage-provisioner"
	I0817 03:02:33.052588 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 03:02:33.052623 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 03:02:33.077480 1683677 cri.go:76] found id: ""
	I0817 03:02:33.077492 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.077498 1683677 logs.go:272] No container was found matching "kube-controller-manager"
	I0817 03:02:33.077506 1683677 logs.go:123] Gathering logs for container status ...
	I0817 03:02:33.077517 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 03:02:33.100699 1683677 logs.go:123] Gathering logs for kubelet ...
	I0817 03:02:33.100718 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 03:02:33.129118 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:27 old-k8s-version-20210817024805-1554185 kubelet[14430]: F0817 03:02:27.532257   14430 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.138717 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:28 old-k8s-version-20210817024805-1554185 kubelet[14458]: F0817 03:02:28.447274   14458 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.148323 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:29 old-k8s-version-20210817024805-1554185 kubelet[14486]: F0817 03:02:29.466067   14486 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.157836 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:30 old-k8s-version-20210817024805-1554185 kubelet[14514]: F0817 03:02:30.451892   14514 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.167337 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:31 old-k8s-version-20210817024805-1554185 kubelet[14542]: F0817 03:02:31.453780   14542 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.176828 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:32 old-k8s-version-20210817024805-1554185 kubelet[14570]: F0817 03:02:32.493682   14570 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.176996 1683677 logs.go:123] Gathering logs for dmesg ...
	I0817 03:02:33.177009 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 03:02:33.193572 1683677 logs.go:123] Gathering logs for describe nodes ...
	I0817 03:02:33.193593 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0817 03:02:33.276403 1683677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0817 03:02:33.276429 1683677 logs.go:123] Gathering logs for containerd ...
	I0817 03:02:33.276441 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W0817 03:02:33.361580 1683677 out.go:371] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0817 03:02:33.361625 1683677 out.go:242] * 
	* 
	W0817 03:02:33.361820 1683677 out.go:242] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0817 03:02:33.361869 1683677 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0817 03:02:33.367626 1683677 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	│                                                                                                                                                                │
	│    * Please attach the following file to the GitHub issue:                                                                                                     │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	│                                                                                                                                                                │
	│    * Please attach the following file to the GitHub issue:                                                                                                     │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 03:02:33.369951 1683677 out.go:177] X Problems detected in kubelet:
	I0817 03:02:33.371762 1683677 out.go:177]   Aug 17 03:02:27 old-k8s-version-20210817024805-1554185 kubelet[14430]: F0817 03:02:27.532257   14430 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.373375 1683677 out.go:177]   Aug 17 03:02:28 old-k8s-version-20210817024805-1554185 kubelet[14458]: F0817 03:02:28.447274   14458 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.375775 1683677 out.go:177]   Aug 17 03:02:29 old-k8s-version-20210817024805-1554185 kubelet[14486]: F0817 03:02:29.466067   14486 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.379649 1683677 out.go:177] 
	W0817 03:02:33.379877 1683677 out.go:242] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0817 03:02:33.379979 1683677 out.go:242] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0817 03:02:33.380044 1683677 out.go:242] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0817 03:02:33.382214 1683677 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:232: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-20210817024805-1554185 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0": exit status 109
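The kubeadm wait-control-plane failure above points at the kubelet, and minikube's own suggestion is to retry with the systemd cgroup driver. A minimal troubleshooting sketch follows, assuming the kic container exposes systemctl, journalctl, and crictl on PATH (this run uses containerd, so crictl stands in for the docker ps / docker logs commands kubeadm suggests); the retry command simply re-adds the flag from the suggestion to the original start arguments:

	# inspect kubelet health inside the minikube node container
	docker exec old-k8s-version-20210817024805-1554185 systemctl status kubelet
	docker exec old-k8s-version-20210817024805-1554185 journalctl -xeu kubelet --no-pager | tail -n 50
	# containerd equivalent of "docker ps -a | grep kube" / "docker logs CONTAINERID"
	docker exec old-k8s-version-20210817024805-1554185 crictl ps -a | grep kube
	# retry the failed start with the cgroup driver the log suggests
	out/minikube-linux-arm64 start -p old-k8s-version-20210817024805-1554185 --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.14.0 \
	  --extra-config=kubelet.cgroup-driver=systemd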
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210817024805-1554185
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210817024805-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29",
	        "Created": "2021-08-17T02:48:07.556948774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1683873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:50:51.260024317Z",
	            "FinishedAt": "2021-08-17T02:50:50.057096311Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/hostname",
	        "HostsPath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/hosts",
	        "LogPath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29-json.log",
	        "Name": "/old-k8s-version-20210817024805-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210817024805-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210817024805-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210817024805-1554185",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210817024805-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210817024805-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210817024805-1554185",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210817024805-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8db859a5f76fa1e2614ca4a38811cf6cdc70c3b63b0f36c6d5b6de8b99796396",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50467"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50464"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50466"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50465"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8db859a5f76f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210817024805-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c8b9fbcd517c",
	                        "old-k8s-version-20210817024805-1554185"
	                    ],
	                    "NetworkID": "9aefabdb2d1d911a23f12e9e262da9d968a8cfa23ed9a2191472a782b604d2a8",
	                    "EndpointID": "1f6b1ef1bd2c282d73335e7da0951a5c768f124f1509e6b7cd10bfc8e555b194",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
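The full docker inspect dump above can be narrowed with a Go-template --format filter when only a few fields matter; a small sketch against the same container name, pulling the container state and the host port forwarded to 8443 (the API server port shown in the Ports map):

	docker inspect -f '{{.State.Status}} {{.State.StartedAt}}' old-k8s-version-20210817024805-1554185
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-20210817024805-1554185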
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185: exit status 2 (312.482098ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
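The "(may be ok)" note reflects that minikube status exits non-zero whenever a component is not Running, even if the host container is up. To see more than the {{.Host}} field the harness requests, a sketch assuming the standard status fields:

	out/minikube-linux-arm64 status -p old-k8s-version-20210817024805-1554185 \
	  --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'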
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-20210817024805-1554185 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 -p old-k8s-version-20210817024805-1554185 logs -n 25: exit status 110 (652.243895ms)
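The logs command itself exits non-zero here because minikube flags problems it finds while collecting logs. For offline inspection one could write the full output to a file or restrict it to the flagged entries (a sketch; old-k8s-version.log is a hypothetical filename, and the flags assume they are available in this minikube build):

	out/minikube-linux-arm64 -p old-k8s-version-20210817024805-1554185 logs --file=old-k8s-version.log
	out/minikube-linux-arm64 -p old-k8s-version-20210817024805-1554185 logs --problems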

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                      Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:41 UTC | Tue, 17 Aug 2021 02:47:20 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2200                                     |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| start   | -p                                                | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:31 UTC | Tue, 17 Aug 2021 02:47:24 UTC |
	|         | force-systemd-flag-20210817024631-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2048 --force-systemd                     |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| -p      | force-systemd-flag-20210817024631-1554185         | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:25 UTC | Tue, 17 Aug 2021 02:47:25 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                                   |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:25 UTC | Tue, 17 Aug 2021 02:47:28 UTC |
	|         | force-systemd-flag-20210817024631-1554185         |                                                   |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:20 UTC | Tue, 17 Aug 2021 02:48:02 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2200                                     |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:02 UTC | Tue, 17 Aug 2021 02:48:05 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:28 UTC | Tue, 17 Aug 2021 02:48:49 UTC |
	|         | cert-options-20210817024728-1554185               |                                                   |         |         |                               |                               |
	|         | --memory=2048                                     |                                                   |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                                   |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                                   |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                                   |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| -p      | cert-options-20210817024728-1554185               | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:49 UTC | Tue, 17 Aug 2021 02:48:49 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                                   |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                                   |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:49 UTC | Tue, 17 Aug 2021 02:48:52 UTC |
	|         | cert-options-20210817024728-1554185               |                                                   |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:05 UTC | Tue, 17 Aug 2021 02:50:20 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                   |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                   |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                   |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:29 UTC | Tue, 17 Aug 2021 02:50:29 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:30 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:50 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:52 UTC | Tue, 17 Aug 2021 02:50:54 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:02 UTC | Tue, 17 Aug 2021 02:51:03 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:03 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:57:04 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:57:15 UTC | Tue, 17 Aug 2021 02:57:15 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:05 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 03:01:12 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:21 UTC | Tue, 17 Aug 2021 03:01:22 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:22 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 03:01:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 03:01:42.915636 1709430 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:01:42.915815 1709430 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:01:42.915825 1709430 out.go:311] Setting ErrFile to fd 2...
	I0817 03:01:42.915829 1709430 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:01:42.915955 1709430 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:01:42.916188 1709430 out.go:305] Setting JSON to false
	I0817 03:01:42.917110 1709430 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38641,"bootTime":1629130662,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 03:01:42.917187 1709430 start.go:121] virtualization:  
	I0817 03:01:42.919362 1709430 out.go:177] * [embed-certs-20210817025908-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 03:01:42.920883 1709430 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 03:01:42.919510 1709430 notify.go:169] Checking for updates...
	I0817 03:01:42.922656 1709430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:01:42.924352 1709430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 03:01:42.926083 1709430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 03:01:42.926489 1709430 config.go:177] Loaded profile config "embed-certs-20210817025908-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:01:42.926938 1709430 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 03:01:42.966220 1709430 docker.go:132] docker version: linux-20.10.8
	I0817 03:01:42.966292 1709430 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:01:43.109734 1709430 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:01:43.035488435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:01:43.109885 1709430 docker.go:244] overlay module found
	I0817 03:01:43.112560 1709430 out.go:177] * Using the docker driver based on existing profile
	I0817 03:01:43.112580 1709430 start.go:278] selected driver: docker
	I0817 03:01:43.112586 1709430 start.go:751] validating driver "docker" against &{Name:embed-certs-20210817025908-1554185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout
:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:01:43.112704 1709430 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 03:01:43.112741 1709430 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:01:43.112750 1709430 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:01:43.113917 1709430 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:01:43.114457 1709430 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:01:43.240185 1709430 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:01:43.161496688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 03:01:43.240305 1709430 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:01:43.240324 1709430 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:01:43.242084 1709430 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:01:43.242179 1709430 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 03:01:43.242202 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:01:43.242210 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:01:43.242235 1709430 start_flags.go:277] config:
	{Name:embed-certs-20210817025908-1554185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeReque
sted:false ExtraDisks:0}
	I0817 03:01:43.244150 1709430 out.go:177] * Starting control plane node embed-certs-20210817025908-1554185 in cluster embed-certs-20210817025908-1554185
	I0817 03:01:43.244175 1709430 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 03:01:43.245724 1709430 out.go:177] * Pulling base image ...
	I0817 03:01:43.245741 1709430 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:01:43.245775 1709430 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 03:01:43.245783 1709430 cache.go:56] Caching tarball of preloaded images
	I0817 03:01:43.245933 1709430 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 03:01:43.245947 1709430 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 03:01:43.246055 1709430 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/config.json ...
	I0817 03:01:43.246214 1709430 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 03:01:43.302552 1709430 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 03:01:43.302578 1709430 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 03:01:43.302588 1709430 cache.go:205] Successfully downloaded all kic artifacts
	I0817 03:01:43.302624 1709430 start.go:313] acquiring machines lock for embed-certs-20210817025908-1554185: {Name:mkc8f6524c9d90ccbc42094864dd90d7c2463223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:01:43.302708 1709430 start.go:317] acquired machines lock for "embed-certs-20210817025908-1554185" in 58.248µs
	I0817 03:01:43.302730 1709430 start.go:93] Skipping create...Using existing machine configuration
	I0817 03:01:43.302735 1709430 fix.go:55] fixHost starting: 
	I0817 03:01:43.303098 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:01:43.333163 1709430 fix.go:108] recreateIfNeeded on embed-certs-20210817025908-1554185: state=Stopped err=<nil>
	W0817 03:01:43.333191 1709430 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 03:01:43.335145 1709430 out.go:177] * Restarting existing docker container for "embed-certs-20210817025908-1554185" ...
	I0817 03:01:43.335200 1709430 cli_runner.go:115] Run: docker start embed-certs-20210817025908-1554185
	I0817 03:01:43.688530 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:01:43.732399 1709430 kic.go:420] container "embed-certs-20210817025908-1554185" state is running.
	I0817 03:01:43.732746 1709430 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210817025908-1554185
	I0817 03:01:43.780842 1709430 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/config.json ...
	I0817 03:01:43.781022 1709430 machine.go:88] provisioning docker machine ...
	I0817 03:01:43.781036 1709430 ubuntu.go:169] provisioning hostname "embed-certs-20210817025908-1554185"
	I0817 03:01:43.781081 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:43.818557 1709430 main.go:130] libmachine: Using SSH client type: native
	I0817 03:01:43.819051 1709430 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50483 <nil> <nil>}
	I0817 03:01:43.819126 1709430 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210817025908-1554185 && echo "embed-certs-20210817025908-1554185" | sudo tee /etc/hostname
	I0817 03:01:43.819693 1709430 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43842->127.0.0.1:50483: read: connection reset by peer
	I0817 03:01:46.941429 1709430 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210817025908-1554185
	
	I0817 03:01:46.941509 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:46.973475 1709430 main.go:130] libmachine: Using SSH client type: native
	I0817 03:01:46.973643 1709430 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50483 <nil> <nil>}
	I0817 03:01:46.973672 1709430 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210817025908-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210817025908-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210817025908-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 03:01:47.098196 1709430 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 03:01:47.098263 1709430 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 03:01:47.098302 1709430 ubuntu.go:177] setting up certificates
	I0817 03:01:47.098337 1709430 provision.go:83] configureAuth start
	I0817 03:01:47.098419 1709430 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210817025908-1554185
	I0817 03:01:47.140418 1709430 provision.go:138] copyHostCerts
	I0817 03:01:47.140475 1709430 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 03:01:47.140490 1709430 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 03:01:47.140552 1709430 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 03:01:47.140638 1709430 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 03:01:47.140647 1709430 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 03:01:47.140669 1709430 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 03:01:47.140724 1709430 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 03:01:47.140732 1709430 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 03:01:47.140752 1709430 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 03:01:47.140796 1709430 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210817025908-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210817025908-1554185]
	I0817 03:01:47.563754 1709430 provision.go:172] copyRemoteCerts
	I0817 03:01:47.563839 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 03:01:47.563897 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:47.594589 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:47.676748 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 03:01:47.691618 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0817 03:01:47.707188 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 03:01:47.721435 1709430 provision.go:86] duration metric: configureAuth took 623.075101ms
	I0817 03:01:47.721456 1709430 ubuntu.go:193] setting minikube options for container-runtime
	I0817 03:01:47.721620 1709430 config.go:177] Loaded profile config "embed-certs-20210817025908-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:01:47.721633 1709430 machine.go:91] provisioned docker machine in 3.94060428s
	I0817 03:01:47.721640 1709430 start.go:267] post-start starting for "embed-certs-20210817025908-1554185" (driver="docker")
	I0817 03:01:47.721653 1709430 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 03:01:47.721699 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 03:01:47.721738 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:47.753024 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:47.836811 1709430 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 03:01:47.839115 1709430 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 03:01:47.839138 1709430 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 03:01:47.839151 1709430 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 03:01:47.839156 1709430 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 03:01:47.839164 1709430 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 03:01:47.839207 1709430 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 03:01:47.839292 1709430 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 03:01:47.839383 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 03:01:47.845028 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:01:47.859892 1709430 start.go:270] post-start completed in 138.235488ms
	I0817 03:01:47.862563 1709430 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 03:01:47.862604 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:47.895396 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:47.981879 1709430 fix.go:57] fixHost completed within 4.679139366s
	I0817 03:01:47.981902 1709430 start.go:80] releasing machines lock for "embed-certs-20210817025908-1554185", held for 4.679182122s
	I0817 03:01:47.981973 1709430 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210817025908-1554185
	I0817 03:01:48.018361 1709430 ssh_runner.go:149] Run: systemctl --version
	I0817 03:01:48.018413 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:48.018620 1709430 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 03:01:48.018669 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:48.084825 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:48.109792 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:48.182191 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 03:01:48.477295 1709430 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 03:01:48.486407 1709430 docker.go:153] disabling docker service ...
	I0817 03:01:48.486452 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 03:01:48.495395 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 03:01:48.503277 1709430 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 03:01:48.574458 1709430 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 03:01:48.650525 1709430 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 03:01:48.658005 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 03:01:48.668723 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0817 03:01:48.680039 1709430 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 03:01:48.685607 1709430 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 03:01:48.691075 1709430 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 03:01:48.770865 1709430 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 03:01:48.856461 1709430 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 03:01:48.856555 1709430 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 03:01:48.860033 1709430 start.go:413] Will wait 60s for crictl version
	I0817 03:01:48.860113 1709430 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:01:48.885394 1709430 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T03:01:48Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 03:01:59.932195 1709430 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:01:59.954055 1709430 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 03:01:59.954117 1709430 ssh_runner.go:149] Run: containerd --version
	I0817 03:01:59.974914 1709430 ssh_runner.go:149] Run: containerd --version
	I0817 03:01:59.996782 1709430 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 03:01:59.996854 1709430 cli_runner.go:115] Run: docker network inspect embed-certs-20210817025908-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:02:00.034307 1709430 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 03:02:00.037446 1709430 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:02:00.046058 1709430 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:02:00.046122 1709430 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:02:00.081340 1709430 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:02:00.081357 1709430 containerd.go:517] Images already preloaded, skipping extraction
	I0817 03:02:00.081401 1709430 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:02:00.108655 1709430 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:02:00.108676 1709430 cache_images.go:74] Images are preloaded, skipping loading
	I0817 03:02:00.108741 1709430 ssh_runner.go:149] Run: sudo crictl info
	I0817 03:02:00.143555 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:02:00.143577 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:02:00.143588 1709430 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 03:02:00.143605 1709430 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210817025908-1554185 NodeName:embed-certs-20210817025908-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 03:02:00.143742 1709430 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210817025908-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 03:02:00.143826 1709430 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210817025908-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 03:02:00.143885 1709430 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 03:02:00.151550 1709430 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 03:02:00.151608 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 03:02:00.158110 1709430 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (579 bytes)
	I0817 03:02:00.172909 1709430 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 03:02:00.185604 1709430 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0817 03:02:00.198148 1709430 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 03:02:00.202587 1709430 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:02:00.211935 1709430 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185 for IP: 192.168.49.2
	I0817 03:02:00.211985 1709430 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 03:02:00.212005 1709430 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 03:02:00.212058 1709430 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/client.key
	I0817 03:02:00.212079 1709430 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/apiserver.key.dd3b5fb2
	I0817 03:02:00.212099 1709430 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/proxy-client.key
	I0817 03:02:00.212189 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 03:02:00.212226 1709430 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 03:02:00.212240 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 03:02:00.212263 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 03:02:00.212302 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 03:02:00.212327 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 03:02:00.212374 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:02:00.213402 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 03:02:00.233903 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 03:02:00.257339 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 03:02:00.272567 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 03:02:00.287332 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 03:02:00.303591 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 03:02:00.323416 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 03:02:00.338181 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 03:02:00.352831 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 03:02:00.367365 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 03:02:00.381902 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 03:02:00.396438 1709430 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 03:02:00.407669 1709430 ssh_runner.go:149] Run: openssl version
	I0817 03:02:00.411901 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 03:02:00.417999 1709430 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 03:02:00.420591 1709430 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 03:02:00.420649 1709430 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 03:02:00.424886 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 03:02:00.430590 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 03:02:00.436691 1709430 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:02:00.439330 1709430 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:02:00.439385 1709430 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:02:00.443503 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 03:02:00.449205 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 03:02:00.455268 1709430 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 03:02:00.457833 1709430 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 03:02:00.457897 1709430 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 03:02:00.462009 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 03:02:00.467819 1709430 kubeadm.go:390] StartCluster: {Name:embed-certs-20210817025908-1554185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:02:00.467915 1709430 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 03:02:00.467970 1709430 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:02:00.490420 1709430 cri.go:76] found id: "3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0"
	I0817 03:02:00.490438 1709430 cri.go:76] found id: "147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab"
	I0817 03:02:00.490443 1709430 cri.go:76] found id: "45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b"
	I0817 03:02:00.490448 1709430 cri.go:76] found id: "26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee"
	I0817 03:02:00.490452 1709430 cri.go:76] found id: "b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727"
	I0817 03:02:00.490457 1709430 cri.go:76] found id: "c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e"
	I0817 03:02:00.490463 1709430 cri.go:76] found id: "c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775"
	I0817 03:02:00.490468 1709430 cri.go:76] found id: "43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e"
	I0817 03:02:00.490478 1709430 cri.go:76] found id: ""
	I0817 03:02:00.490512 1709430 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:02:00.502926 1709430 cri.go:103] JSON = null
	W0817 03:02:00.502962 1709430 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0817 03:02:00.503016 1709430 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 03:02:00.508722 1709430 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 03:02:00.508744 1709430 kubeadm.go:600] restartCluster start
	I0817 03:02:00.508777 1709430 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 03:02:00.514130 1709430 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:00.514999 1709430 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210817025908-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:02:00.515236 1709430 kubeconfig.go:128] "embed-certs-20210817025908-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 03:02:00.515752 1709430 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:02:00.517932 1709430 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 03:02:00.523411 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:00.523458 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:00.532371 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:00.732714 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:00.732781 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:00.741454 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:00.932728 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:00.932776 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:00.941590 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.132831 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.132935 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.143133 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.333433 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.333504 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.342847 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.533149 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.533202 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.542098 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.733346 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.733423 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.742171 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.933424 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.933503 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.942215 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.133501 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.133589 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.144077 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.333428 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.333518 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.342978 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.533200 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.533285 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.542303 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.732496 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.732541 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.741347 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.932764 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.932815 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.941561 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.132828 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.132954 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.145350 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.332600 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.332663 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.341975 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.533200 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.533260 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.542160 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.542171 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.542205 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.550805 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.550831 1709430 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0817 03:02:03.550837 1709430 kubeadm.go:1032] stopping kube-system containers ...
	I0817 03:02:03.550848 1709430 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:02:03.550890 1709430 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:02:03.573081 1709430 cri.go:76] found id: "3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0"
	I0817 03:02:03.573100 1709430 cri.go:76] found id: "147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab"
	I0817 03:02:03.573105 1709430 cri.go:76] found id: "45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b"
	I0817 03:02:03.573110 1709430 cri.go:76] found id: "26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee"
	I0817 03:02:03.573115 1709430 cri.go:76] found id: "b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727"
	I0817 03:02:03.573120 1709430 cri.go:76] found id: "c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e"
	I0817 03:02:03.573125 1709430 cri.go:76] found id: "c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775"
	I0817 03:02:03.573129 1709430 cri.go:76] found id: "43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e"
	I0817 03:02:03.573133 1709430 cri.go:76] found id: ""
	I0817 03:02:03.573138 1709430 cri.go:221] Stopping containers: [3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0 147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab 45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b 26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727 c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775 43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e]
	I0817 03:02:03.573180 1709430 ssh_runner.go:149] Run: which crictl
	I0817 03:02:03.575701 1709430 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0 147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab 45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b 26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727 c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775 43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e
	I0817 03:02:03.598212 1709430 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 03:02:03.607086 1709430 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:02:03.613074 1709430 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 02:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 17 02:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2075 Aug 17 03:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 17 02:59 /etc/kubernetes/scheduler.conf
	
	I0817 03:02:03.613128 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 03:02:03.618914 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 03:02:03.624793 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 03:02:03.630292 1709430 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.630355 1709430 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 03:02:03.635919 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 03:02:03.641434 1709430 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.641502 1709430 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 03:02:03.646893 1709430 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:02:03.652576 1709430 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 03:02:03.652614 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:03.715531 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.469382 1709430 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.753788332s)
	I0817 03:02:05.469407 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.630841 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.737765 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.802641 1709430 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:02:05.802701 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:06.312308 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:06.812440 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:07.311850 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:07.811839 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:08.312759 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:08.811837 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:09.311781 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:09.811827 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:10.312505 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:10.812802 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:11.311853 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:11.811838 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:12.312766 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:12.811823 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:13.312571 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:13.812682 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:14.311986 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:14.812787 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:15.312782 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:15.350503 1709430 api_server.go:70] duration metric: took 9.547861543s to wait for apiserver process to appear ...
	I0817 03:02:15.350522 1709430 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:02:15.350531 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:20.352792 1709430 api_server.go:255] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 03:02:20.853542 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:22.286600 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:02:22.286661 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:02:22.353817 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:22.428645 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:02:22.428696 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:02:22.853168 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:22.900944 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:02:22.900974 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:02:23.353240 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:23.365866 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:02:23.365915 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:02:23.853552 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:23.862728 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:02:23.880276 1709430 api_server.go:139] control plane version: v1.21.3
	I0817 03:02:23.880298 1709430 api_server.go:129] duration metric: took 8.529771373s to wait for apiserver health ...
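	The healthz probes above can be reproduced by hand against the same endpoint; a minimal sketch, assuming the apiserver address 192.168.49.2:8443 taken from this log, and noting that the 403 for "system:anonymous" and the transient 500s are expected while the post-start hooks listed above finish:
	
	  # probe overall apiserver health; -k is needed because minikube serves certificates from its own CA
	  curl -k -sS -o /dev/null -w '%{http_code}\n' https://192.168.49.2:8443/healthz
	  # per-check output, matching the [+]/[-] lines captured in the 500 responses above
	  curl -k -sS 'https://192.168.49.2:8443/healthz?verbose'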
	I0817 03:02:23.880307 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:02:23.880320 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:02:23.882619 1709430 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:02:23.882682 1709430 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:02:23.887761 1709430 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 03:02:23.887780 1709430 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:02:23.901886 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
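	The CNI manifest applied above is the kindnet configuration minikube recommends for the docker driver with containerd; a hedged sketch for checking the result with the same bundled kubectl used in the log (the app=kindnet label is an assumption about the manifest, which is not shown here):
	
	  sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get pods -n kube-system -l app=kindnet -o wide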
	I0817 03:02:24.613061 1709430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:02:24.627170 1709430 system_pods.go:59] 9 kube-system pods found
	I0817 03:02:24.627205 1709430 system_pods.go:61] "coredns-558bd4d5db-dgbzs" [69a5e40e-9bca-4e76-976f-7e87232e2501] Running
	I0817 03:02:24.627214 1709430 system_pods.go:61] "etcd-embed-certs-20210817025908-1554185" [7e3ff9cb-4663-44f8-bdeb-a6851dd56f03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 03:02:24.627235 1709430 system_pods.go:61] "kindnet-6s6ww" [582e5a12-d987-4cc2-b439-264038f7fdec] Running
	I0817 03:02:24.627250 1709430 system_pods.go:61] "kube-apiserver-embed-certs-20210817025908-1554185" [d2b29440-c8bb-4946-99af-a8f6af9d310e] Running
	I0817 03:02:24.627255 1709430 system_pods.go:61] "kube-controller-manager-embed-certs-20210817025908-1554185" [055f695a-0d98-43bb-bf98-4ef9b42a8f36] Running
	I0817 03:02:24.627259 1709430 system_pods.go:61] "kube-proxy-nxbdw" [f0cef6b9-79b0-4944-917c-a3a5d3ac0488] Running
	I0817 03:02:24.627272 1709430 system_pods.go:61] "kube-scheduler-embed-certs-20210817025908-1554185" [fc3d4c1d-1efb-47ec-bf4d-3b8f51f07643] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 03:02:24.627281 1709430 system_pods.go:61] "metrics-server-7c784ccb57-7snbh" [1e2242b2-d474-4e68-b3be-5c357740f82f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:02:24.627290 1709430 system_pods.go:61] "storage-provisioner" [486f6174-9eff-4afd-8b28-7f7f218f6341] Running
	I0817 03:02:24.627296 1709430 system_pods.go:74] duration metric: took 14.217459ms to wait for pod list to return data ...
	I0817 03:02:24.627312 1709430 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:02:24.630756 1709430 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:02:24.630785 1709430 node_conditions.go:123] node cpu capacity is 2
	I0817 03:02:24.630797 1709430 node_conditions.go:105] duration metric: took 3.48013ms to run NodePressure ...
	I0817 03:02:24.630836 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:24.880793 1709430 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 03:02:24.884801 1709430 kubeadm.go:746] kubelet initialised
	I0817 03:02:24.884820 1709430 kubeadm.go:747] duration metric: took 4.010036ms waiting for restarted kubelet to initialise ...
	I0817 03:02:24.884827 1709430 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:02:24.889652 1709430 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-dgbzs" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:24.902669 1709430 pod_ready.go:92] pod "coredns-558bd4d5db-dgbzs" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:24.902694 1709430 pod_ready.go:81] duration metric: took 13.016142ms waiting for pod "coredns-558bd4d5db-dgbzs" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:24.902704 1709430 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:26.912028 1709430 pod_ready.go:102] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:29.412961 1709430 pod_ready.go:102] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:31.412833 1709430 pod_ready.go:92] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:31.412855 1709430 pod_ready.go:81] duration metric: took 6.510143114s waiting for pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:31.412884 1709430 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
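	The per-pod readiness polling above is roughly what kubectl wait does; a sketch under the assumption that the kubeconfig context carries the profile name, as elsewhere in this report, using the pod name from the log:
	
	  kubectl --context embed-certs-20210817025908-1554185 -n kube-system \
	    wait --for=condition=Ready pod/kube-apiserver-embed-certs-20210817025908-1554185 --timeout=4m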
	I0817 03:02:32.895057 1683677 kubeadm.go:392] StartCluster complete in 11m24.90769791s
	I0817 03:02:32.895103 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 03:02:32.895159 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 03:02:32.918790 1683677 cri.go:76] found id: ""
	I0817 03:02:32.918806 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.918827 1683677 logs.go:272] No container was found matching "kube-apiserver"
	I0817 03:02:32.918833 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 03:02:32.918883 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 03:02:32.949174 1683677 cri.go:76] found id: ""
	I0817 03:02:32.949187 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.949193 1683677 logs.go:272] No container was found matching "etcd"
	I0817 03:02:32.949198 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 03:02:32.949239 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 03:02:32.969914 1683677 cri.go:76] found id: ""
	I0817 03:02:32.969929 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.969935 1683677 logs.go:272] No container was found matching "coredns"
	I0817 03:02:32.969939 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 03:02:32.969977 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 03:02:32.990332 1683677 cri.go:76] found id: ""
	I0817 03:02:32.990347 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.990353 1683677 logs.go:272] No container was found matching "kube-scheduler"
	I0817 03:02:32.990358 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 03:02:32.990402 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 03:02:33.012039 1683677 cri.go:76] found id: ""
	I0817 03:02:33.012053 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.012059 1683677 logs.go:272] No container was found matching "kube-proxy"
	I0817 03:02:33.012064 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 03:02:33.012102 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 03:02:33.032711 1683677 cri.go:76] found id: ""
	I0817 03:02:33.032724 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.032729 1683677 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 03:02:33.032734 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 03:02:33.032772 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 03:02:33.052565 1683677 cri.go:76] found id: ""
	I0817 03:02:33.052577 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.052582 1683677 logs.go:272] No container was found matching "storage-provisioner"
	I0817 03:02:33.052588 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 03:02:33.052623 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 03:02:33.077480 1683677 cri.go:76] found id: ""
	I0817 03:02:33.077492 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.077498 1683677 logs.go:272] No container was found matching "kube-controller-manager"
	I0817 03:02:33.077506 1683677 logs.go:123] Gathering logs for container status ...
	I0817 03:02:33.077517 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 03:02:33.100699 1683677 logs.go:123] Gathering logs for kubelet ...
	I0817 03:02:33.100718 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 03:02:33.129118 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:27 old-k8s-version-20210817024805-1554185 kubelet[14430]: F0817 03:02:27.532257   14430 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.138717 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:28 old-k8s-version-20210817024805-1554185 kubelet[14458]: F0817 03:02:28.447274   14458 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.148323 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:29 old-k8s-version-20210817024805-1554185 kubelet[14486]: F0817 03:02:29.466067   14486 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.157836 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:30 old-k8s-version-20210817024805-1554185 kubelet[14514]: F0817 03:02:30.451892   14514 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.167337 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:31 old-k8s-version-20210817024805-1554185 kubelet[14542]: F0817 03:02:31.453780   14542 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.176828 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:32 old-k8s-version-20210817024805-1554185 kubelet[14570]: F0817 03:02:32.493682   14570 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.176996 1683677 logs.go:123] Gathering logs for dmesg ...
	I0817 03:02:33.177009 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 03:02:33.193572 1683677 logs.go:123] Gathering logs for describe nodes ...
	I0817 03:02:33.193593 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0817 03:02:33.276403 1683677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0817 03:02:33.276429 1683677 logs.go:123] Gathering logs for containerd ...
	I0817 03:02:33.276441 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W0817 03:02:33.361580 1683677 out.go:371] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0817 03:02:33.361625 1683677 out.go:242] * 
	W0817 03:02:33.361820 1683677 out.go:242] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0817 03:02:33.361869 1683677 out.go:242] * 
	W0817 03:02:33.367626 1683677 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	│                                                                                                                                                                │
	│    * Please attach the following file to the GitHub issue:                                                                                                     │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 03:02:33.369951 1683677 out.go:177] X Problems detected in kubelet:
	I0817 03:02:33.371762 1683677 out.go:177]   Aug 17 03:02:27 old-k8s-version-20210817024805-1554185 kubelet[14430]: F0817 03:02:27.532257   14430 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.373375 1683677 out.go:177]   Aug 17 03:02:28 old-k8s-version-20210817024805-1554185 kubelet[14458]: F0817 03:02:28.447274   14458 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.375775 1683677 out.go:177]   Aug 17 03:02:29 old-k8s-version-20210817024805-1554185 kubelet[14486]: F0817 03:02:29.466067   14486 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.379649 1683677 out.go:177] 
	W0817 03:02:33.379877 1683677 out.go:242] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0817 03:02:33.379979 1683677 out.go:242] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0817 03:02:33.380044 1683677 out.go:242] * Related issue: https://github.com/kubernetes/minikube/issues/4172
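	Following the suggestion printed above, a minimal retry sketch; the profile name comes from this log, and any other start flags used by the original test invocation are omitted here:
	
	  journalctl -xeu kubelet
	  out/minikube-linux-arm64 start -p old-k8s-version-20210817024805-1554185 \
	    --extra-config=kubelet.cgroup-driver=systemd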
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:50:51 UTC, end at Tue 2021-08-17 03:02:34 UTC. --
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.468623695Z" level=info msg="RemovePodSandbox \"7d53e801511ed07e6fabcb3c88dd69fd2c4ef7c3c028e9e44605be1ffc98ba60\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.491869824Z" level=info msg="StopPodSandbox for \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.491924815Z" level=info msg="Container to stop \"fdb4bd345708970e1d90521f8d81da07a88c79e442c82c7c115a4c1c6ded93a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.491993187Z" level=info msg="TearDown network for sandbox \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.492004092Z" level=info msg="StopPodSandbox for \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.526895702Z" level=info msg="RemovePodSandbox for \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.552169798Z" level=info msg="RemovePodSandbox \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.579967120Z" level=info msg="StopPodSandbox for \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.580012288Z" level=info msg="Container to stop \"2c48aa387b60234b5845590a62ab0933aef10e3afa1695cc7f5a93e93dc5b0c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.580081120Z" level=info msg="TearDown network for sandbox \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.580094043Z" level=info msg="StopPodSandbox for \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.609914286Z" level=info msg="RemovePodSandbox for \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.620668256Z" level=info msg="RemovePodSandbox \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.650881963Z" level=info msg="StopPodSandbox for \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.650932834Z" level=info msg="Container to stop \"86ac8067fb1b5139e8f2e23b9daa6b76aa704ec28b4c4cf6d281c7293bc4259d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.650992936Z" level=info msg="TearDown network for sandbox \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.651004021Z" level=info msg="StopPodSandbox for \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.680816075Z" level=info msg="RemovePodSandbox for \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.694511521Z" level=info msg="RemovePodSandbox \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724328999Z" level=info msg="StopPodSandbox for \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724372674Z" level=info msg="Container to stop \"f8b050af48208844c31f77ed2dc4fc25f4633ce187e85801e393aa0fce9c1ce0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724442655Z" level=info msg="TearDown network for sandbox \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724453609Z" level=info msg="StopPodSandbox for \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.749234435Z" level=info msg="RemovePodSandbox for \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.758871386Z" level=info msg="RemovePodSandbox \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> kernel <==
	*  03:02:34 up 10:44,  0 users,  load average: 2.10, 1.83, 1.69
	Linux old-k8s-version-20210817024805-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:50:51 UTC, end at Tue 2021-08-17 03:02:34 UTC. --
	Aug 17 03:02:33 old-k8s-version-20210817024805-1554185 kubelet[14697]: I0817 03:02:33.656049   14697 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
	Aug 17 03:02:33 old-k8s-version-20210817024805-1554185 kubelet[14697]: I0817 03:02:33.657108   14697 cpu_manager.go:155] [cpumanager] starting with none policy
	Aug 17 03:02:33 old-k8s-version-20210817024805-1554185 kubelet[14697]: I0817 03:02:33.657189   14697 cpu_manager.go:156] [cpumanager] reconciling every 10s
	Aug 17 03:02:33 old-k8s-version-20210817024805-1554185 kubelet[14697]: I0817 03:02:33.657199   14697 policy_none.go:42] [cpumanager] none policy: Start
	Aug 17 03:02:33 old-k8s-version-20210817024805-1554185 kubelet[14697]: F0817 03:02:33.658336   14697 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	Aug 17 03:02:33 old-k8s-version-20210817024805-1554185 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 17 03:02:33 old-k8s-version-20210817024805-1554185 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 240.
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: Flag --allow-privileged has been deprecated, will be removed in a future version
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: Flag --allow-privileged has been deprecated, will be removed in a future version
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: I0817 03:02:34.572871   14839 server.go:417] Version: v1.14.0
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: I0817 03:02:34.573226   14839 plugins.go:103] No cloud provider specified.
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: I0817 03:02:34.573247   14839 server.go:754] Client rotation is on, will bootstrap in background
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: I0817 03:02:34.575843   14839 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: I0817 03:02:34.584749   14839 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: I0817 03:02:34.585204   14839 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: I0817 03:02:34.585297   14839 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: I0817 03:02:34.585438   14839 container_manager_linux.go:286] Creating device plugin manager: true
	Aug 17 03:02:34 old-k8s-version-20210817024805-1554185 kubelet[14839]: I0817 03:02:34.585522   14839 state_mem.go:36] [cpumanager] initializing new in-memory state store
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 03:02:34.537418 1713149 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (704.14s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (109.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-different-port-20210817024852-1554185 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-different-port-20210817024852-1554185 --alsologtostderr -v=1: exit status 80 (2.017256497s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-different-port-20210817024852-1554185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 02:57:15.556318 1697275 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:57:15.556838 1697275 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:57:15.556850 1697275 out.go:311] Setting ErrFile to fd 2...
	I0817 02:57:15.556854 1697275 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:57:15.556988 1697275 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:57:15.557158 1697275 out.go:305] Setting JSON to false
	I0817 02:57:15.557186 1697275 mustload.go:65] Loading cluster: default-k8s-different-port-20210817024852-1554185
	I0817 02:57:15.557548 1697275 config.go:177] Loaded profile config "default-k8s-different-port-20210817024852-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:57:15.558006 1697275 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:57:15.595305 1697275 host.go:66] Checking if "default-k8s-different-port-20210817024852-1554185" exists ...
	I0817 02:57:15.596021 1697275 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-different-port-20210817024852-1554185 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0817 02:57:15.598293 1697275 out.go:177] * Pausing node default-k8s-different-port-20210817024852-1554185 ... 
	I0817 02:57:15.598312 1697275 host.go:66] Checking if "default-k8s-different-port-20210817024852-1554185" exists ...
	I0817 02:57:15.598596 1697275 ssh_runner.go:149] Run: systemctl --version
	I0817 02:57:15.598635 1697275 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:57:15.636287 1697275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:57:15.738453 1697275 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:57:15.747276 1697275 pause.go:50] kubelet running: true
	I0817 02:57:15.747322 1697275 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 02:57:15.974784 1697275 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 02:57:15.974866 1697275 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 02:57:16.062608 1697275 cri.go:76] found id: "69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8"
	I0817 02:57:16.062633 1697275 cri.go:76] found id: "99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b"
	I0817 02:57:16.062638 1697275 cri.go:76] found id: "9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673"
	I0817 02:57:16.062643 1697275 cri.go:76] found id: "1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18"
	I0817 02:57:16.062647 1697275 cri.go:76] found id: "2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45"
	I0817 02:57:16.062653 1697275 cri.go:76] found id: "18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b"
	I0817 02:57:16.062678 1697275 cri.go:76] found id: "ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a"
	I0817 02:57:16.062682 1697275 cri.go:76] found id: "88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173"
	I0817 02:57:16.062686 1697275 cri.go:76] found id: "49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37"
	I0817 02:57:16.062697 1697275 cri.go:76] found id: "c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18"
	I0817 02:57:16.062702 1697275 cri.go:76] found id: ""
	I0817 02:57:16.062755 1697275 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:57:16.109242 1697275 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4","pid":5450,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4/rootfs","created":"2021-08-17T02:56:59.293588394Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_b4483bc5-0558-4d83-96e9-b61e6cb235ae"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c","pid":5148,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07d92f497790799f3b040708207f63279b1d
be7222f3c1364c7dfa5d4ce6e64c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c/rootfs","created":"2021-08-17T02:56:57.067210328Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-5mnnj_6b672c7a-ea5e-4ef4-932c-95a01336037e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b","pid":4540,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b/rootfs","created":"2021-08-17T02:56:28.866202484Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernete
s.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18","pid":5053,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18/rootfs","created":"2021-08-17T02:56:56.776202432Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5","pid":5513,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ec11ed6f5492f78e08da43bd3
e78ad063d7bbe7c9de321c4d2654d555ecc3b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5/rootfs","created":"2021-08-17T02:56:59.102876254Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-67mmz_d68b6163-f479-44ce-b297-206cc3375f8f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b","pid":5730,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b/rootfs","created":"2021-08-17T02:56:59.904800393Z","annotations":{"io.kubernetes.cri.container-type":"sand
box","io.kubernetes.cri.sandbox-id":"208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-h5wgx_bea5fe6c-5029-44e1-b093-0759f8b51143"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45","pid":4547,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45/rootfs","created":"2021-08-17T02:56:28.871225556Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73
","pid":4421,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73/rootfs","created":"2021-08-17T02:56:28.514965366Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210817024852-1554185_bcc4f74b39f590f7090a76d147da96dd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601","pid":4336,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a4578b9de7d
2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601/rootfs","created":"2021-08-17T02:56:28.344495469Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210817024852-1554185_bf3eae5cc63964cb286fd79fd9930e06"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8","pid":5603,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8/rootfs","created":"2021-08-17T02:56:59.504655403Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.san
dbox-id":"0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173","pid":4468,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173/rootfs","created":"2021-08-17T02:56:28.61502173Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9","pid":5122,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9","rootfs":"
/run/containerd/io.containerd.runtime.v2.task/k8s.io/9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9/rootfs","created":"2021-08-17T02:56:56.947945759Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-jvbx9_c3ef7b0d-aa4d-431f-85c3-eec88c3223bc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b","pid":5226,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b/rootfs","created":"2021-08-17T02:56:57.309687618Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernete
s.cri.sandbox-id":"07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673","pid":5212,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673/rootfs","created":"2021-08-17T02:56:57.302408585Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d","pid":4396,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d","ro
otfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d/rootfs","created":"2021-08-17T02:56:28.464208362Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210817024852-1554185_926d01de76a01d6f6dd7a1be4ad00fed"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683","pid":5746,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683/rootfs","created":"2021-08-17T02:56:59.918357207Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kuber
netes.cri.sandbox-id":"ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-twxcq_1314b7d4-1f3d-489b-81c9-9e21210da53e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a","pid":4552,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a/rootfs","created":"2021-08-17T02:56:28.859588125Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4
d6781deea6e","pid":5006,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e/rootfs","created":"2021-08-17T02:56:56.533312223Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-8rfj4_c0122f9e-6b2a-4ee2-ae8a-3985bbc5160d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18","pid":5792,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890
ddb8b5daf044d97e18/rootfs","created":"2021-08-17T02:57:00.015987011Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba","pid":4407,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba/rootfs","created":"2021-08-17T02:56:28.464439572Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210817024852-1554185_b
4f49eaa0f2a5d62f2c809a5928fa926"},"owner":"root"}]
	I0817 02:57:16.109534 1697275 cri.go:113] list returned 20 containers
	I0817 02:57:16.109549 1697275 cri.go:116] container: {ID:0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4 Status:running}
	I0817 02:57:16.109567 1697275 cri.go:118] skipping 0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4 - not in ps
	I0817 02:57:16.109573 1697275 cri.go:116] container: {ID:07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c Status:running}
	I0817 02:57:16.109579 1697275 cri.go:118] skipping 07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c - not in ps
	I0817 02:57:16.109586 1697275 cri.go:116] container: {ID:18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b Status:running}
	I0817 02:57:16.109592 1697275 cri.go:116] container: {ID:1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18 Status:running}
	I0817 02:57:16.109602 1697275 cri.go:116] container: {ID:1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5 Status:running}
	I0817 02:57:16.109608 1697275 cri.go:118] skipping 1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5 - not in ps
	I0817 02:57:16.109612 1697275 cri.go:116] container: {ID:208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b Status:running}
	I0817 02:57:16.109621 1697275 cri.go:118] skipping 208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b - not in ps
	I0817 02:57:16.109625 1697275 cri.go:116] container: {ID:2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45 Status:running}
	I0817 02:57:16.109635 1697275 cri.go:116] container: {ID:4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73 Status:running}
	I0817 02:57:16.109640 1697275 cri.go:118] skipping 4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73 - not in ps
	I0817 02:57:16.109644 1697275 cri.go:116] container: {ID:4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601 Status:running}
	I0817 02:57:16.109649 1697275 cri.go:118] skipping 4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601 - not in ps
	I0817 02:57:16.109654 1697275 cri.go:116] container: {ID:69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8 Status:running}
	I0817 02:57:16.109662 1697275 cri.go:116] container: {ID:88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173 Status:running}
	I0817 02:57:16.109667 1697275 cri.go:116] container: {ID:9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9 Status:running}
	I0817 02:57:16.109678 1697275 cri.go:118] skipping 9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9 - not in ps
	I0817 02:57:16.109682 1697275 cri.go:116] container: {ID:99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b Status:running}
	I0817 02:57:16.109688 1697275 cri.go:116] container: {ID:9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673 Status:running}
	I0817 02:57:16.109696 1697275 cri.go:116] container: {ID:a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d Status:running}
	I0817 02:57:16.109702 1697275 cri.go:118] skipping a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d - not in ps
	I0817 02:57:16.109712 1697275 cri.go:116] container: {ID:ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683 Status:running}
	I0817 02:57:16.109718 1697275 cri.go:118] skipping ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683 - not in ps
	I0817 02:57:16.109722 1697275 cri.go:116] container: {ID:ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a Status:running}
	I0817 02:57:16.109731 1697275 cri.go:116] container: {ID:c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e Status:running}
	I0817 02:57:16.109736 1697275 cri.go:118] skipping c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e - not in ps
	I0817 02:57:16.109743 1697275 cri.go:116] container: {ID:c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18 Status:running}
	I0817 02:57:16.109749 1697275 cri.go:116] container: {ID:d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba Status:running}
	I0817 02:57:16.109760 1697275 cri.go:118] skipping d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba - not in ps
	I0817 02:57:16.109803 1697275 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b
	I0817 02:57:16.123435 1697275 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b 1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18
	I0817 02:57:16.135484 1697275 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b 1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:57:16Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0817 02:57:16.411820 1697275 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:57:16.420794 1697275 pause.go:50] kubelet running: false
	I0817 02:57:16.420838 1697275 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 02:57:16.542233 1697275 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 02:57:16.542300 1697275 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 02:57:16.610799 1697275 cri.go:76] found id: "69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8"
	I0817 02:57:16.610875 1697275 cri.go:76] found id: "99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b"
	I0817 02:57:16.610889 1697275 cri.go:76] found id: "9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673"
	I0817 02:57:16.610894 1697275 cri.go:76] found id: "1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18"
	I0817 02:57:16.610898 1697275 cri.go:76] found id: "2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45"
	I0817 02:57:16.610903 1697275 cri.go:76] found id: "18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b"
	I0817 02:57:16.610907 1697275 cri.go:76] found id: "ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a"
	I0817 02:57:16.610913 1697275 cri.go:76] found id: "88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173"
	I0817 02:57:16.610918 1697275 cri.go:76] found id: "49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37"
	I0817 02:57:16.610931 1697275 cri.go:76] found id: "c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18"
	I0817 02:57:16.610935 1697275 cri.go:76] found id: ""
	I0817 02:57:16.610996 1697275 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:57:16.653938 1697275 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4","pid":5450,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4/rootfs","created":"2021-08-17T02:56:59.293588394Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_b4483bc5-0558-4d83-96e9-b61e6cb235ae"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c","pid":5148,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07d92f497790799f3b040708207f63279b1d
be7222f3c1364c7dfa5d4ce6e64c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c/rootfs","created":"2021-08-17T02:56:57.067210328Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-5mnnj_6b672c7a-ea5e-4ef4-932c-95a01336037e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b","pid":4540,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b/rootfs","created":"2021-08-17T02:56:28.866202484Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes
.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18","pid":5053,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18/rootfs","created":"2021-08-17T02:56:56.776202432Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5","pid":5513,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ec11ed6f5492f78e08da43bd3e
78ad063d7bbe7c9de321c4d2654d555ecc3b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5/rootfs","created":"2021-08-17T02:56:59.102876254Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-67mmz_d68b6163-f479-44ce-b297-206cc3375f8f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b","pid":5730,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b/rootfs","created":"2021-08-17T02:56:59.904800393Z","annotations":{"io.kubernetes.cri.container-type":"sandb
ox","io.kubernetes.cri.sandbox-id":"208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-h5wgx_bea5fe6c-5029-44e1-b093-0759f8b51143"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45","pid":4547,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45/rootfs","created":"2021-08-17T02:56:28.871225556Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73"
,"pid":4421,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73/rootfs","created":"2021-08-17T02:56:28.514965366Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210817024852-1554185_bcc4f74b39f590f7090a76d147da96dd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601","pid":4336,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a4578b9de7d2
a78c988e4afde91a927bf64416182c3acc126d3b9923e172601/rootfs","created":"2021-08-17T02:56:28.344495469Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210817024852-1554185_bf3eae5cc63964cb286fd79fd9930e06"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8","pid":5603,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8/rootfs","created":"2021-08-17T02:56:59.504655403Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sand
box-id":"0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173","pid":4468,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173/rootfs","created":"2021-08-17T02:56:28.61502173Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9","pid":5122,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9","rootfs":"/
run/containerd/io.containerd.runtime.v2.task/k8s.io/9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9/rootfs","created":"2021-08-17T02:56:56.947945759Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-jvbx9_c3ef7b0d-aa4d-431f-85c3-eec88c3223bc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b","pid":5226,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b/rootfs","created":"2021-08-17T02:56:57.309687618Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes
.cri.sandbox-id":"07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673","pid":5212,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673/rootfs","created":"2021-08-17T02:56:57.302408585Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d","pid":4396,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d","roo
tfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d/rootfs","created":"2021-08-17T02:56:28.464208362Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210817024852-1554185_926d01de76a01d6f6dd7a1be4ad00fed"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683","pid":5746,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683/rootfs","created":"2021-08-17T02:56:59.918357207Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubern
etes.cri.sandbox-id":"ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-twxcq_1314b7d4-1f3d-489b-81c9-9e21210da53e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a","pid":4552,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a/rootfs","created":"2021-08-17T02:56:28.859588125Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d
6781deea6e","pid":5006,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e/rootfs","created":"2021-08-17T02:56:56.533312223Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-8rfj4_c0122f9e-6b2a-4ee2-ae8a-3985bbc5160d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18","pid":5792,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890d
db8b5daf044d97e18/rootfs","created":"2021-08-17T02:57:00.015987011Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba","pid":4407,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba/rootfs","created":"2021-08-17T02:56:28.464439572Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210817024852-1554185_b4
f49eaa0f2a5d62f2c809a5928fa926"},"owner":"root"}]
	I0817 02:57:16.654187 1697275 cri.go:113] list returned 20 containers
	I0817 02:57:16.654198 1697275 cri.go:116] container: {ID:0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4 Status:running}
	I0817 02:57:16.654209 1697275 cri.go:118] skipping 0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4 - not in ps
	I0817 02:57:16.654217 1697275 cri.go:116] container: {ID:07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c Status:running}
	I0817 02:57:16.654223 1697275 cri.go:118] skipping 07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c - not in ps
	I0817 02:57:16.654234 1697275 cri.go:116] container: {ID:18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b Status:paused}
	I0817 02:57:16.654240 1697275 cri.go:122] skipping {18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b paused}: state = "paused", want "running"
	I0817 02:57:16.654254 1697275 cri.go:116] container: {ID:1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18 Status:running}
	I0817 02:57:16.654259 1697275 cri.go:116] container: {ID:1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5 Status:running}
	I0817 02:57:16.654269 1697275 cri.go:118] skipping 1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5 - not in ps
	I0817 02:57:16.654273 1697275 cri.go:116] container: {ID:208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b Status:running}
	I0817 02:57:16.654282 1697275 cri.go:118] skipping 208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b - not in ps
	I0817 02:57:16.654286 1697275 cri.go:116] container: {ID:2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45 Status:running}
	I0817 02:57:16.654293 1697275 cri.go:116] container: {ID:4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73 Status:running}
	I0817 02:57:16.654298 1697275 cri.go:118] skipping 4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73 - not in ps
	I0817 02:57:16.654306 1697275 cri.go:116] container: {ID:4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601 Status:running}
	I0817 02:57:16.654311 1697275 cri.go:118] skipping 4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601 - not in ps
	I0817 02:57:16.654322 1697275 cri.go:116] container: {ID:69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8 Status:running}
	I0817 02:57:16.654329 1697275 cri.go:116] container: {ID:88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173 Status:running}
	I0817 02:57:16.654338 1697275 cri.go:116] container: {ID:9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9 Status:running}
	I0817 02:57:16.654344 1697275 cri.go:118] skipping 9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9 - not in ps
	I0817 02:57:16.654352 1697275 cri.go:116] container: {ID:99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b Status:running}
	I0817 02:57:16.654358 1697275 cri.go:116] container: {ID:9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673 Status:running}
	I0817 02:57:16.654367 1697275 cri.go:116] container: {ID:a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d Status:running}
	I0817 02:57:16.654373 1697275 cri.go:118] skipping a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d - not in ps
	I0817 02:57:16.654377 1697275 cri.go:116] container: {ID:ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683 Status:running}
	I0817 02:57:16.654386 1697275 cri.go:118] skipping ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683 - not in ps
	I0817 02:57:16.654391 1697275 cri.go:116] container: {ID:ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a Status:running}
	I0817 02:57:16.654399 1697275 cri.go:116] container: {ID:c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e Status:running}
	I0817 02:57:16.654404 1697275 cri.go:118] skipping c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e - not in ps
	I0817 02:57:16.654416 1697275 cri.go:116] container: {ID:c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18 Status:running}
	I0817 02:57:16.654421 1697275 cri.go:116] container: {ID:d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba Status:running}
	I0817 02:57:16.654427 1697275 cri.go:118] skipping d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba - not in ps
	I0817 02:57:16.654474 1697275 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18
	I0817 02:57:16.667602 1697275 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18 2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45
	I0817 02:57:16.679544 1697275 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18 2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:57:16Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0817 02:57:17.220929 1697275 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:57:17.230096 1697275 pause.go:50] kubelet running: false
	I0817 02:57:17.230176 1697275 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 02:57:17.354264 1697275 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 02:57:17.354337 1697275 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 02:57:17.424069 1697275 cri.go:76] found id: "69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8"
	I0817 02:57:17.424092 1697275 cri.go:76] found id: "99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b"
	I0817 02:57:17.424098 1697275 cri.go:76] found id: "9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673"
	I0817 02:57:17.424103 1697275 cri.go:76] found id: "1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18"
	I0817 02:57:17.424107 1697275 cri.go:76] found id: "2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45"
	I0817 02:57:17.424112 1697275 cri.go:76] found id: "18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b"
	I0817 02:57:17.424121 1697275 cri.go:76] found id: "ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a"
	I0817 02:57:17.424126 1697275 cri.go:76] found id: "88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173"
	I0817 02:57:17.424138 1697275 cri.go:76] found id: "49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37"
	I0817 02:57:17.424145 1697275 cri.go:76] found id: "c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18"
	I0817 02:57:17.424152 1697275 cri.go:76] found id: ""
	I0817 02:57:17.424194 1697275 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:57:17.467747 1697275 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4","pid":5450,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4/rootfs","created":"2021-08-17T02:56:59.293588394Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_b4483bc5-0558-4d83-96e9-b61e6cb235ae"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c","pid":5148,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07d92f497790799f3b040708207f63279b1d
be7222f3c1364c7dfa5d4ce6e64c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c/rootfs","created":"2021-08-17T02:56:57.067210328Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-5mnnj_6b672c7a-ea5e-4ef4-932c-95a01336037e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b","pid":4540,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b/rootfs","created":"2021-08-17T02:56:28.866202484Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes
.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18","pid":5053,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18/rootfs","created":"2021-08-17T02:56:56.776202432Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5","pid":5513,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ec11ed6f5492f78e08da43bd3e7
8ad063d7bbe7c9de321c4d2654d555ecc3b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5/rootfs","created":"2021-08-17T02:56:59.102876254Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-67mmz_d68b6163-f479-44ce-b297-206cc3375f8f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b","pid":5730,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b/rootfs","created":"2021-08-17T02:56:59.904800393Z","annotations":{"io.kubernetes.cri.container-type":"sandbo
x","io.kubernetes.cri.sandbox-id":"208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-h5wgx_bea5fe6c-5029-44e1-b093-0759f8b51143"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45","pid":4547,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45/rootfs","created":"2021-08-17T02:56:28.871225556Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73",
"pid":4421,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73/rootfs","created":"2021-08-17T02:56:28.514965366Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210817024852-1554185_bcc4f74b39f590f7090a76d147da96dd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601","pid":4336,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a4578b9de7d2a
78c988e4afde91a927bf64416182c3acc126d3b9923e172601/rootfs","created":"2021-08-17T02:56:28.344495469Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210817024852-1554185_bf3eae5cc63964cb286fd79fd9930e06"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8","pid":5603,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8/rootfs","created":"2021-08-17T02:56:59.504655403Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandb
ox-id":"0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173","pid":4468,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173/rootfs","created":"2021-08-17T02:56:28.61502173Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9","pid":5122,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9","rootfs":"/r
un/containerd/io.containerd.runtime.v2.task/k8s.io/9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9/rootfs","created":"2021-08-17T02:56:56.947945759Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-jvbx9_c3ef7b0d-aa4d-431f-85c3-eec88c3223bc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b","pid":5226,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b/rootfs","created":"2021-08-17T02:56:57.309687618Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.
cri.sandbox-id":"07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673","pid":5212,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673/rootfs","created":"2021-08-17T02:56:57.302408585Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d","pid":4396,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d","root
fs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d/rootfs","created":"2021-08-17T02:56:28.464208362Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210817024852-1554185_926d01de76a01d6f6dd7a1be4ad00fed"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683","pid":5746,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683/rootfs","created":"2021-08-17T02:56:59.918357207Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kuberne
tes.cri.sandbox-id":"ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-twxcq_1314b7d4-1f3d-489b-81c9-9e21210da53e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a","pid":4552,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a/rootfs","created":"2021-08-17T02:56:28.859588125Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6
781deea6e","pid":5006,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e/rootfs","created":"2021-08-17T02:56:56.533312223Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-8rfj4_c0122f9e-6b2a-4ee2-ae8a-3985bbc5160d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18","pid":5792,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890dd
b8b5daf044d97e18/rootfs","created":"2021-08-17T02:57:00.015987011Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba","pid":4407,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba/rootfs","created":"2021-08-17T02:56:28.464439572Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210817024852-1554185_b4f
49eaa0f2a5d62f2c809a5928fa926"},"owner":"root"}]
	I0817 02:57:17.467972 1697275 cri.go:113] list returned 20 containers
	I0817 02:57:17.467989 1697275 cri.go:116] container: {ID:0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4 Status:running}
	I0817 02:57:17.467999 1697275 cri.go:118] skipping 0506d9abb91c564b6afc2545d948fc8860ed099ce8b72268033a712adbfbebc4 - not in ps
	I0817 02:57:17.468006 1697275 cri.go:116] container: {ID:07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c Status:running}
	I0817 02:57:17.468014 1697275 cri.go:118] skipping 07d92f497790799f3b040708207f63279b1dbe7222f3c1364c7dfa5d4ce6e64c - not in ps
	I0817 02:57:17.468019 1697275 cri.go:116] container: {ID:18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b Status:paused}
	I0817 02:57:17.468029 1697275 cri.go:122] skipping {18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b paused}: state = "paused", want "running"
	I0817 02:57:17.468039 1697275 cri.go:116] container: {ID:1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18 Status:paused}
	I0817 02:57:17.468046 1697275 cri.go:122] skipping {1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18 paused}: state = "paused", want "running"
	I0817 02:57:17.468059 1697275 cri.go:116] container: {ID:1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5 Status:running}
	I0817 02:57:17.468065 1697275 cri.go:118] skipping 1ec11ed6f5492f78e08da43bd3e78ad063d7bbe7c9de321c4d2654d555ecc3b5 - not in ps
	I0817 02:57:17.468069 1697275 cri.go:116] container: {ID:208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b Status:running}
	I0817 02:57:17.468079 1697275 cri.go:118] skipping 208e67c0b68d2282df8a6c22d45a8abcd82c7b0be457647699973fbabf4ef81b - not in ps
	I0817 02:57:17.468083 1697275 cri.go:116] container: {ID:2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45 Status:running}
	I0817 02:57:17.468090 1697275 cri.go:116] container: {ID:4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73 Status:running}
	I0817 02:57:17.468095 1697275 cri.go:118] skipping 4275fafdcad20037d0ebbcbf6b7b3690c56bdfd21214dfbb0edf71ffc2d22a73 - not in ps
	I0817 02:57:17.468103 1697275 cri.go:116] container: {ID:4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601 Status:running}
	I0817 02:57:17.468109 1697275 cri.go:118] skipping 4a4578b9de7d2a78c988e4afde91a927bf64416182c3acc126d3b9923e172601 - not in ps
	I0817 02:57:17.468117 1697275 cri.go:116] container: {ID:69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8 Status:running}
	I0817 02:57:17.468122 1697275 cri.go:116] container: {ID:88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173 Status:running}
	I0817 02:57:17.468127 1697275 cri.go:116] container: {ID:9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9 Status:running}
	I0817 02:57:17.468133 1697275 cri.go:118] skipping 9682c574efcad8ffac800ec96a4b7800c463481bf45b26b20c50545a206488f9 - not in ps
	I0817 02:57:17.468140 1697275 cri.go:116] container: {ID:99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b Status:running}
	I0817 02:57:17.468145 1697275 cri.go:116] container: {ID:9c7e6513031e2f2d68d3377544337f86e0b129d821f77c549e88f98ef3806673 Status:running}
	I0817 02:57:17.468156 1697275 cri.go:116] container: {ID:a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d Status:running}
	I0817 02:57:17.468161 1697275 cri.go:118] skipping a279c06813b2dae78dce6ec1c1d56d6352ebc8e1892ef7ee3c4f2edad725ea3d - not in ps
	I0817 02:57:17.468165 1697275 cri.go:116] container: {ID:ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683 Status:running}
	I0817 02:57:17.468176 1697275 cri.go:118] skipping ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683 - not in ps
	I0817 02:57:17.468180 1697275 cri.go:116] container: {ID:ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a Status:running}
	I0817 02:57:17.468188 1697275 cri.go:116] container: {ID:c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e Status:running}
	I0817 02:57:17.468194 1697275 cri.go:118] skipping c4296c2c68eef55809cce50f506f0ff5cf5e96632239867d5fda4d6781deea6e - not in ps
	I0817 02:57:17.468204 1697275 cri.go:116] container: {ID:c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18 Status:running}
	I0817 02:57:17.468209 1697275 cri.go:116] container: {ID:d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba Status:running}
	I0817 02:57:17.468214 1697275 cri.go:118] skipping d9d56919ab979d4f1040019b5469a1fab255e786674269c55971d3ddd2afdbba - not in ps
	I0817 02:57:17.468259 1697275 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45
	I0817 02:57:17.481210 1697275 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45 69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8
	I0817 02:57:17.495637 1697275 out.go:177] 
	W0817 02:57:17.495782 1697275 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45 69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:57:17Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45 69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:57:17Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0817 02:57:17.495799 1697275 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0817 02:57:17.505005 1697275 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_2.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_2.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0817 02:57:17.506521 1697275 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-arm64 pause -p default-k8s-different-port-20210817024852-1554185 --alsologtostderr -v=1 failed: exit status 80
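The pause failure above comes down to argument count: per the usage text in the captured stderr, runc pause accepts exactly one container ID, but the second ssh_runner invocation passed two IDs in a single command. A minimal Go sketch of the one-ID-per-invocation pattern, under assumptions: pauseContainers is a hypothetical helper (not minikube's actual code path), and the --root path and container IDs are copied from the runc list output earlier in this log.

package main

import (
	"fmt"
	"os/exec"
)

// pauseContainers pauses each running container individually via runc,
// since `runc pause` takes exactly one container ID per invocation.
func pauseContainers(ids []string) error {
	for _, id := range ids {
		cmd := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "pause", id)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// Container IDs taken from the runc list output earlier in this log.
	ids := []string{
		"2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45",
		"69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8",
	}
	if err := pauseContainers(ids); err != nil {
		fmt.Println(err)
	}
}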
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210817024852-1554185
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210817024852-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15",
	        "Created": "2021-08-17T02:48:53.903905113Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1686164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:51:25.232423109Z",
	            "FinishedAt": "2021-08-17T02:51:23.69333619Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15/hostname",
	        "HostsPath": "/var/lib/docker/containers/014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15/hosts",
	        "LogPath": "/var/lib/docker/containers/014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15/014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15-json.log",
	        "Name": "/default-k8s-different-port-20210817024852-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210817024852-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210817024852-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/eb8af045cee76116725a046b85b3b0ae49569752ba414ad4365051160f03a64c-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb8af045cee76116725a046b85b3b0ae49569752ba414ad4365051160f03a64c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb8af045cee76116725a046b85b3b0ae49569752ba414ad4365051160f03a64c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb8af045cee76116725a046b85b3b0ae49569752ba414ad4365051160f03a64c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210817024852-1554185",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210817024852-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210817024852-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210817024852-1554185",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210817024852-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e033c7cdd54e2e8a1b304b25fd2480d92ef219b4f58e99aa66ff497a5733fe77",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50469"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50471"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50470"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e033c7cdd54e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210817024852-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "014007154d01",
	                        "default-k8s-different-port-20210817024852-1554185"
	                    ],
	                    "NetworkID": "c01a2de90263b8c4a0f0d301a1f3067482e97e685086a463971911b51d3b8270",
	                    "EndpointID": "77f5da90ca9fa3abde61c8a1c95c0bd17d1bf917f80b53a0c6874d8f261f500b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
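The docker inspect document above is large; when only one field matters for triage (for example the published host ports under NetworkSettings.Ports), a Go template keeps the output to that field. A small sketch, assuming the profile/container name from this run and that the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print only the port bindings section of the inspect output as JSON.
	out, err := exec.Command("docker", "inspect",
		"--format", "{{json .NetworkSettings.Ports}}",
		"default-k8s-different-port-20210817024852-1554185").CombinedOutput()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
	}
	fmt.Println(string(out))
}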
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210817024852-1554185 -n default-k8s-different-port-20210817024852-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210817024852-1554185 -n default-k8s-different-port-20210817024852-1554185: exit status 2 (14.575792105s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 02:57:32.130584 1697577 status.go:422] Error apiserver status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
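The status probe above fails because the apiserver's /healthz reports etcd as unhealthy, consistent with the etcd container having been paused by the first (single-ID) runc invocation. A minimal sketch of requesting the same verbose health report directly, under assumptions: the host and port are taken from the log, anonymous access to /healthz is permitted, and TLS verification is skipped only for this ad-hoc probe:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// The apiserver presents a cluster-internal certificate, so skip
		// verification for this one-off probe.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	// ?verbose returns the per-check listing, e.g. "[-]etcd failed".
	resp, err := client.Get("https://192.168.49.2:8444/healthz?verbose")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}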
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-different-port-20210817024852-1554185 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 -p default-k8s-different-port-20210817024852-1554185 logs -n 25: exit status 110 (15.779016503s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                      Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | force-systemd-env-20210817024449-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:49 UTC | Tue, 17 Aug 2021 02:46:27 UTC |
	|         | force-systemd-env-20210817024449-1554185          |                                                   |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | -v=5 --driver=docker                              |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| -p      | force-systemd-env-20210817024449-1554185          | force-systemd-env-20210817024449-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:27 UTC | Tue, 17 Aug 2021 02:46:28 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                                   |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-env-20210817024449-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:28 UTC | Tue, 17 Aug 2021 02:46:30 UTC |
	|         | force-systemd-env-20210817024449-1554185          |                                                   |         |         |                               |                               |
	| delete  | -p                                                | kubenet-20210817024630-1554185                    | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:30 UTC | Tue, 17 Aug 2021 02:46:30 UTC |
	|         | kubenet-20210817024630-1554185                    |                                                   |         |         |                               |                               |
	| delete  | -p                                                | flannel-20210817024630-1554185                    | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:30 UTC | Tue, 17 Aug 2021 02:46:31 UTC |
	|         | flannel-20210817024630-1554185                    |                                                   |         |         |                               |                               |
	| delete  | -p                                                | false-20210817024631-1554185                      | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:31 UTC | Tue, 17 Aug 2021 02:46:31 UTC |
	|         | false-20210817024631-1554185                      |                                                   |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:41 UTC | Tue, 17 Aug 2021 02:47:20 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2200                                     |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| start   | -p                                                | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:31 UTC | Tue, 17 Aug 2021 02:47:24 UTC |
	|         | force-systemd-flag-20210817024631-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2048 --force-systemd                     |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| -p      | force-systemd-flag-20210817024631-1554185         | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:25 UTC | Tue, 17 Aug 2021 02:47:25 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                                   |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:25 UTC | Tue, 17 Aug 2021 02:47:28 UTC |
	|         | force-systemd-flag-20210817024631-1554185         |                                                   |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:20 UTC | Tue, 17 Aug 2021 02:48:02 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2200                                     |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:02 UTC | Tue, 17 Aug 2021 02:48:05 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:28 UTC | Tue, 17 Aug 2021 02:48:49 UTC |
	|         | cert-options-20210817024728-1554185               |                                                   |         |         |                               |                               |
	|         | --memory=2048                                     |                                                   |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                                   |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                                   |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                                   |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| -p      | cert-options-20210817024728-1554185               | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:49 UTC | Tue, 17 Aug 2021 02:48:49 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                                   |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                                   |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:49 UTC | Tue, 17 Aug 2021 02:48:52 UTC |
	|         | cert-options-20210817024728-1554185               |                                                   |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:05 UTC | Tue, 17 Aug 2021 02:50:20 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                   |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                   |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                   |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:29 UTC | Tue, 17 Aug 2021 02:50:29 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:30 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:50 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:52 UTC | Tue, 17 Aug 2021 02:50:54 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:02 UTC | Tue, 17 Aug 2021 02:51:03 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:03 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:57:04 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:57:15 UTC | Tue, 17 Aug 2021 02:57:15 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 02:51:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 02:51:24.251043 1685977 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:51:24.251152 1685977 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:51:24.251169 1685977 out.go:311] Setting ErrFile to fd 2...
	I0817 02:51:24.251197 1685977 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:51:24.251379 1685977 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:51:24.251651 1685977 out.go:305] Setting JSON to false
	I0817 02:51:24.252664 1685977 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38023,"bootTime":1629130662,"procs":404,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:51:24.252748 1685977 start.go:121] virtualization:  
	I0817 02:51:24.256768 1685977 out.go:177] * [default-k8s-different-port-20210817024852-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:51:24.256906 1685977 notify.go:169] Checking for updates...
	I0817 02:51:24.259618 1685977 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:51:24.261360 1685977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:51:24.263335 1685977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:51:24.264950 1685977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:51:24.272522 1685977 config.go:177] Loaded profile config "default-k8s-different-port-20210817024852-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:51:24.273311 1685977 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:51:24.334374 1685977 docker.go:132] docker version: linux-20.10.8
	I0817 02:51:24.334455 1685977 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:51:24.474543 1685977 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:51:24.397405173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:51:24.474647 1685977 docker.go:244] overlay module found
	I0817 02:51:24.476523 1685977 out.go:177] * Using the docker driver based on existing profile
	I0817 02:51:24.476541 1685977 start.go:278] selected driver: docker
	I0817 02:51:24.476547 1685977 start.go:751] validating driver "docker" against &{Name:default-k8s-different-port-20210817024852-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210817024852-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:51:24.476648 1685977 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 02:51:24.476680 1685977 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 02:51:24.476692 1685977 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 02:51:24.478265 1685977 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 02:51:24.478570 1685977 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:51:24.588046 1685977 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:51:24.514296213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 02:51:24.588196 1685977 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 02:51:24.588217 1685977 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 02:51:24.590004 1685977 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 02:51:24.590103 1685977 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 02:51:24.590126 1685977 cni.go:93] Creating CNI manager for ""
	I0817 02:51:24.590133 1685977 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:51:24.590143 1685977 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210817024852-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210817024852-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:51:24.593113 1685977 out.go:177] * Starting control plane node default-k8s-different-port-20210817024852-1554185 in cluster default-k8s-different-port-20210817024852-1554185
	I0817 02:51:24.593147 1685977 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 02:51:24.594951 1685977 out.go:177] * Pulling base image ...
	I0817 02:51:24.594981 1685977 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:51:24.595015 1685977 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 02:51:24.595024 1685977 cache.go:56] Caching tarball of preloaded images
	I0817 02:51:24.595028 1685977 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 02:51:24.595158 1685977 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 02:51:24.595167 1685977 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 02:51:24.595281 1685977 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/config.json ...
	I0817 02:51:24.658896 1685977 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 02:51:24.658919 1685977 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 02:51:24.658935 1685977 cache.go:205] Successfully downloaded all kic artifacts
	I0817 02:51:24.658973 1685977 start.go:313] acquiring machines lock for default-k8s-different-port-20210817024852-1554185: {Name:mka04c3640b00539ca31a06f35f3a83f2a32db60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 02:51:24.659063 1685977 start.go:317] acquired machines lock for "default-k8s-different-port-20210817024852-1554185" in 68.496µs
	I0817 02:51:24.659084 1685977 start.go:93] Skipping create...Using existing machine configuration
	I0817 02:51:24.659090 1685977 fix.go:55] fixHost starting: 
	I0817 02:51:24.659382 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:51:24.710397 1685977 fix.go:108] recreateIfNeeded on default-k8s-different-port-20210817024852-1554185: state=Stopped err=<nil>
	W0817 02:51:24.710419 1685977 fix.go:134] unexpected machine state, will restart: <nil>
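[Annotation, not part of the log: the two entries above are the "fix host" path. The existing container is inspected, found stopped, and minikube decides to restart it rather than recreate it (the `docker start` call appears a few lines below). A minimal, hypothetical sketch of that check using the same docker CLI calls; this is an illustration, not minikube's fix.go.]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs `docker container inspect <name> --format={{.State.Status}}`,
// the same command shown in the log above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	name := "default-k8s-different-port-20210817024852-1554185"
	state, err := containerState(name)
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// docker reports a stopped container as "exited"; the log surfaces this as state=Stopped.
	if state == "exited" {
		// Reuse the existing machine: restart it instead of recreating it.
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			fmt.Println("restart failed:", err)
		}
	}
}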
	I0817 02:51:21.173334 1683677 api_server.go:255] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 02:51:21.672990 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:25.474293 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 02:51:25.474312 1683677 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 02:51:25.672640 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:25.773285 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 02:51:25.773350 1683677 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 02:51:26.172489 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:26.181455 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0817 02:51:26.181515 1683677 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0817 02:51:26.672039 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:26.684988 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0817 02:51:26.699432 1683677 api_server.go:139] control plane version: v1.14.0
	I0817 02:51:26.699470 1683677 api_server.go:129] duration metric: took 11.027864328s to wait for apiserver health ...
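[Annotation, not part of the log: the interleaved process 1683677 (the old-k8s-version profile) polls the restarted apiserver's /healthz roughly every 500ms, riding out the 403 responses (anonymous user blocked while RBAC bootstraps) and the 500 (post-start hooks still failing) until it gets 200. A rough stand-alone sketch of such a wait loop, assuming a self-signed apiserver certificate; this is not minikube's api_server.go.]

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe hits a self-signed cert, so skip verification for this check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok" above
			}
			// 403/500 mean the apiserver is up but not ready yet; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.58.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}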
	I0817 02:51:26.699506 1683677 cni.go:93] Creating CNI manager for ""
	I0817 02:51:26.699525 1683677 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:51:24.713519 1685977 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20210817024852-1554185" ...
	I0817 02:51:24.713573 1685977 cli_runner.go:115] Run: docker start default-k8s-different-port-20210817024852-1554185
	I0817 02:51:25.240385 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:51:25.288731 1685977 kic.go:420] container "default-k8s-different-port-20210817024852-1554185" state is running.
	I0817 02:51:25.289954 1685977 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:25.342199 1685977 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/config.json ...
	I0817 02:51:25.342375 1685977 machine.go:88] provisioning docker machine ...
	I0817 02:51:25.342401 1685977 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20210817024852-1554185"
	I0817 02:51:25.342456 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:25.405467 1685977 main.go:130] libmachine: Using SSH client type: native
	I0817 02:51:25.405642 1685977 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50473 <nil> <nil>}
	I0817 02:51:25.405663 1685977 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210817024852-1554185 && echo "default-k8s-different-port-20210817024852-1554185" | sudo tee /etc/hostname
	I0817 02:51:25.406301 1685977 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48612->127.0.0.1:50473: read: connection reset by peer
	I0817 02:51:28.539738 1685977 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210817024852-1554185
	
	I0817 02:51:28.539847 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:28.584321 1685977 main.go:130] libmachine: Using SSH client type: native
	I0817 02:51:28.584494 1685977 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50473 <nil> <nil>}
	I0817 02:51:28.584525 1685977 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210817024852-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210817024852-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210817024852-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 02:51:28.701964 1685977 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 02:51:28.701987 1685977 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 02:51:28.702012 1685977 ubuntu.go:177] setting up certificates
	I0817 02:51:28.702027 1685977 provision.go:83] configureAuth start
	I0817 02:51:28.702083 1685977 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:28.735488 1685977 provision.go:138] copyHostCerts
	I0817 02:51:28.735544 1685977 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 02:51:28.735556 1685977 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 02:51:28.735612 1685977 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 02:51:28.735688 1685977 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 02:51:28.735699 1685977 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 02:51:28.735721 1685977 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 02:51:28.735769 1685977 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 02:51:28.735780 1685977 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 02:51:28.735804 1685977 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 02:51:28.735842 1685977 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210817024852-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20210817024852-1554185]
	I0817 02:51:26.701474 1683677 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:51:26.701537 1683677 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:51:26.704950 1683677 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0817 02:51:26.704967 1683677 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:51:26.716527 1683677 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:51:26.967205 1683677 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:51:26.979532 1683677 system_pods.go:59] 8 kube-system pods found
	I0817 02:51:26.979567 1683677 system_pods.go:61] "coredns-fb8b8dccf-jp8m9" [b23bfd69-ff05-11eb-b750-02420e977974] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 02:51:26.979574 1683677 system_pods.go:61] "etcd-old-k8s-version-20210817024805-1554185" [d5b20d31-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979604 1683677 system_pods.go:61] "kindnet-n5vgl" [b2493cdc-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979609 1683677 system_pods.go:61] "kube-apiserver-old-k8s-version-20210817024805-1554185" [d8145a64-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979621 1683677 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210817024805-1554185" [cfbc7b8c-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979625 1683677 system_pods.go:61] "kube-proxy-nhh5q" [b248f49f-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979630 1683677 system_pods.go:61] "kube-scheduler-old-k8s-version-20210817024805-1554185" [d2b8571e-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979640 1683677 system_pods.go:61] "storage-provisioner" [b32ef806-ff05-11eb-b750-02420e977974] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:51:26.979647 1683677 system_pods.go:74] duration metric: took 12.42679ms to wait for pod list to return data ...
	I0817 02:51:26.979670 1683677 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:51:26.982872 1683677 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:51:26.982920 1683677 node_conditions.go:123] node cpu capacity is 2
	I0817 02:51:26.982944 1683677 node_conditions.go:105] duration metric: took 3.261441ms to run NodePressure ...
	I0817 02:51:26.982957 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:27.127614 1683677 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 02:51:27.130997 1683677 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0817 02:51:27.503592 1683677 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0817 02:51:27.944862 1683677 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0817 02:51:28.477064 1683677 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0817 02:51:29.262325 1683677 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0817 02:51:29.288766 1685977 provision.go:172] copyRemoteCerts
	I0817 02:51:29.288834 1685977 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 02:51:29.288876 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.320404 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:51:29.408982 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 02:51:29.424314 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1314 bytes)
	I0817 02:51:29.440413 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 02:51:29.455873 1685977 provision.go:86] duration metric: configureAuth took 753.829649ms
	I0817 02:51:29.455889 1685977 ubuntu.go:193] setting minikube options for container-runtime
	I0817 02:51:29.456060 1685977 config.go:177] Loaded profile config "default-k8s-different-port-20210817024852-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:51:29.456076 1685977 machine.go:91] provisioned docker machine in 4.113685256s
	I0817 02:51:29.456082 1685977 start.go:267] post-start starting for "default-k8s-different-port-20210817024852-1554185" (driver="docker")
	I0817 02:51:29.456089 1685977 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 02:51:29.456133 1685977 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 02:51:29.456173 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.503542 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:51:29.589531 1685977 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 02:51:29.591999 1685977 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 02:51:29.592021 1685977 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 02:51:29.592034 1685977 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 02:51:29.592042 1685977 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 02:51:29.592056 1685977 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 02:51:29.592103 1685977 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 02:51:29.592183 1685977 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 02:51:29.592274 1685977 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 02:51:29.598386 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:51:29.613754 1685977 start.go:270] post-start completed in 157.661872ms
	I0817 02:51:29.613804 1685977 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:51:29.613842 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.647542 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:51:29.730400 1685977 fix.go:57] fixHost completed within 5.071305921s
	I0817 02:51:29.730418 1685977 start.go:80] releasing machines lock for "default-k8s-different-port-20210817024852-1554185", held for 5.071346118s
	I0817 02:51:29.730493 1685977 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.762515 1685977 ssh_runner.go:149] Run: systemctl --version
	I0817 02:51:29.762564 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.762576 1685977 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 02:51:29.762626 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.803101 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:51:29.808589 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:51:30.024912 1685977 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 02:51:30.036264 1685977 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 02:51:30.044957 1685977 docker.go:153] disabling docker service ...
	I0817 02:51:30.045003 1685977 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 02:51:30.054220 1685977 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 02:51:30.063671 1685977 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 02:51:30.138573 1685977 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 02:51:30.222832 1685977 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 02:51:30.231509 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 02:51:30.242655 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
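[Annotation, not part of the log: the command above writes /etc/containerd/config.toml by piping a base64-encoded TOML document through `base64 -d` and `sudo tee`. To inspect what actually lands on disk, the payload can be decoded locally; a small sketch follows. The short constant below covers only the first line of the real payload (`root = "/var/lib/containerd"`); substitute the full string from the log to see the whole file.]

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Stand-in for the full base64 payload from the log line above.
	payload := "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo="
	decoded, err := base64.StdEncoding.DecodeString(payload)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Print(string(decoded)) // prints the containerd config.toml text
}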
	I0817 02:51:30.254611 1685977 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 02:51:30.261968 1685977 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 02:51:30.267422 1685977 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 02:51:30.345903 1685977 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 02:51:30.420998 1685977 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 02:51:30.421089 1685977 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 02:51:30.424443 1685977 start.go:413] Will wait 60s for crictl version
	I0817 02:51:30.424503 1685977 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:51:30.448673 1685977 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T02:51:30Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 02:51:30.768179 1683677 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0817 02:51:31.845796 1683677 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0817 02:51:33.718958 1683677 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0817 02:51:36.272473 1683677 retry.go:31] will retry after 5.131623747s: kubelet not initialised
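[Annotation, not part of the log: both the crictl version probe and the "kubelet not initialised" checks above are wrapped in retry.go, which waits a roughly increasing interval after each failure (360ms, 436ms, 527ms, ... 5.1s). A minimal retry helper in the same spirit; the growth factor and attempt budget below are illustrative, not the exact backoff minikube uses.]

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry calls fn until it succeeds or the attempt budget is spent,
// growing the wait between attempts.
func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		err := fn()
		if err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return errors.New("condition not met after all retries")
}

func main() {
	_ = retry(10, 360*time.Millisecond, func() error {
		// Placeholder check; in the log this is "has the restarted kubelet re-created its pods?".
		return errors.New("kubelet not initialised")
	})
}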
	I0817 02:51:41.495445 1685977 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:51:41.518517 1685977 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 02:51:41.518571 1685977 ssh_runner.go:149] Run: containerd --version
	I0817 02:51:41.542471 1685977 ssh_runner.go:149] Run: containerd --version
	I0817 02:51:41.564932 1685977 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 02:51:41.565000 1685977 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210817024852-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 02:51:41.596579 1685977 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 02:51:41.599650 1685977 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
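[Annotation, not part of the log: the bash one-liner above makes /etc/hosts resolve host.minikube.internal to the host gateway (192.168.49.1). It drops any existing line for that name, appends the new mapping via a temp file, and copies the result back with sudo. A hypothetical Go equivalent of the same idempotent update, run against a scratch file rather than the real /etc/hosts.]

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites an /etc/hosts-style file so that exactly one line
// maps host to ip, mirroring the grep -v / echo / cp one-liner in the log.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) || strings.HasSuffix(line, " "+host) {
			continue // drop any stale mapping for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writing the real /etc/hosts needs root; use a scratch copy for illustration.
	tmp := "/tmp/hosts.example"
	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := upsertHostsEntry(tmp, "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}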
	I0817 02:51:41.607901 1685977 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:51:41.607960 1685977 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:51:41.633436 1685977 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:51:41.633455 1685977 containerd.go:517] Images already preloaded, skipping extraction
	I0817 02:51:41.633495 1685977 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:51:41.656240 1685977 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:51:41.656259 1685977 cache_images.go:74] Images are preloaded, skipping loading
	I0817 02:51:41.656298 1685977 ssh_runner.go:149] Run: sudo crictl info
	I0817 02:51:41.677901 1685977 cni.go:93] Creating CNI manager for ""
	I0817 02:51:41.677922 1685977 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:51:41.677934 1685977 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 02:51:41.677948 1685977 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210817024852-1554185 NodeName:default-k8s-different-port-20210817024852-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 02:51:41.678072 1685977 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210817024852-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 02:51:41.678162 1685977 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210817024852-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210817024852-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0817 02:51:41.678216 1685977 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 02:51:41.686039 1685977 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 02:51:41.686087 1685977 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 02:51:41.696253 1685977 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (594 bytes)
	I0817 02:51:41.707800 1685977 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 02:51:41.719235 1685977 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0817 02:51:41.730312 1685977 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 02:51:41.733438 1685977 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 02:51:41.741372 1685977 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185 for IP: 192.168.49.2
	I0817 02:51:41.741439 1685977 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 02:51:41.741463 1685977 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 02:51:41.741513 1685977 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.key
	I0817 02:51:41.741533 1685977 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/apiserver.key.dd3b5fb2
	I0817 02:51:41.741551 1685977 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/proxy-client.key
	I0817 02:51:41.741665 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 02:51:41.741714 1685977 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 02:51:41.741729 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 02:51:41.741757 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 02:51:41.741788 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 02:51:41.741815 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 02:51:41.741874 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:51:41.743037 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 02:51:41.757778 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 02:51:41.772662 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 02:51:41.788238 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 02:51:41.803479 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 02:51:41.818405 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 02:51:41.834290 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 02:51:41.850257 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 02:51:41.865564 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 02:51:41.880436 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 02:51:41.896728 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 02:51:41.911439 1685977 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 02:51:41.922240 1685977 ssh_runner.go:149] Run: openssl version
	I0817 02:51:41.926694 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 02:51:41.933350 1685977 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 02:51:41.937922 1685977 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 02:51:41.937983 1685977 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 02:51:41.943999 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 02:51:41.951862 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 02:51:41.958563 1685977 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 02:51:41.961196 1685977 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 02:51:41.961239 1685977 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 02:51:41.965518 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 02:51:41.971667 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 02:51:41.977893 1685977 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:51:41.980824 1685977 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:51:41.980862 1685977 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:51:41.986962 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 02:51:41.992849 1685977 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210817024852-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210817024852-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:51:41.992963 1685977 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 02:51:41.993009 1685977 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:51:42.016929 1685977 cri.go:76] found id: "f19ee84b7c94fa694cc46aa4f13704d95553f850e7887991adc2814948d63f41"
	I0817 02:51:42.016949 1685977 cri.go:76] found id: "49df9fab49f87f7e2113469f38d8dabd5ae0608b25a7beb6a20fd67c7c539d05"
	I0817 02:51:42.016955 1685977 cri.go:76] found id: "78c4325f16b23a8e0c2de1c6bab0242fc4abb3c1f4f067ad4d53cc19f9d8c6d3"
	I0817 02:51:42.016959 1685977 cri.go:76] found id: "8ff2e923d4fececca9e36feac69692fefdc6c915178679880a20c2a1d0956edf"
	I0817 02:51:42.016963 1685977 cri.go:76] found id: "708b84fed7b61b21afa376ef8807e544b39450abc93c611d6f112ac4ff06f48e"
	I0817 02:51:42.016969 1685977 cri.go:76] found id: "7278496269401b57811d1a6760d5898522e77d3d73e46421d9bc1e3dd87be48d"
	I0817 02:51:42.016973 1685977 cri.go:76] found id: "41834b1c0478bcecbd69b2ef8b1d5d654426af1909f04dbc4b219acef4a2ecd0"
	I0817 02:51:42.016982 1685977 cri.go:76] found id: "0daab25e4445c73484cec64d8d72127d3a5420b8b54c70d8a155b6fc50297375"
	I0817 02:51:42.016987 1685977 cri.go:76] found id: ""
	I0817 02:51:42.017034 1685977 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:51:42.029655 1685977 cri.go:103] JSON = null
	W0817 02:51:42.029691 1685977 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0817 02:51:42.029754 1685977 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 02:51:42.037228 1685977 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 02:51:42.037248 1685977 kubeadm.go:600] restartCluster start
	I0817 02:51:42.037283 1685977 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 02:51:42.042889 1685977 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:42.043683 1685977 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210817024852-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:51:42.043908 1685977 kubeconfig.go:128] "default-k8s-different-port-20210817024852-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 02:51:42.045048 1685977 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:51:42.048853 1685977 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 02:51:42.055696 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:42.055736 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:42.064895 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:42.265222 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:42.265292 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:42.274662 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:42.465981 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:42.466063 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:42.475766 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:42.665067 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:42.665149 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:42.675438 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:42.865606 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:42.865709 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:42.876424 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:43.065771 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:43.065878 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:43.081119 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:43.265472 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:43.265563 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:43.277132 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:43.465453 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:43.465540 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:43.475586 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:43.665924 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:43.666000 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:43.675088 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:43.865406 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:43.865477 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:43.875308 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:44.065630 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:44.065717 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:44.075143 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:41.408402 1683677 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0817 02:51:44.265470 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:44.265569 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:44.274785 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:44.465030 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:44.465084 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:44.474244 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:44.665583 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:44.665667 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:44.674961 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:44.865219 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:44.865274 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:44.874844 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:45.065022 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:45.065109 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:45.075997 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:45.076015 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:45.076059 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:45.085677 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:45.085695 1685977 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0817 02:51:45.085702 1685977 kubeadm.go:1032] stopping kube-system containers ...
	I0817 02:51:45.085712 1685977 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:51:45.085754 1685977 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:51:45.110329 1685977 cri.go:76] found id: "f19ee84b7c94fa694cc46aa4f13704d95553f850e7887991adc2814948d63f41"
	I0817 02:51:45.110349 1685977 cri.go:76] found id: "49df9fab49f87f7e2113469f38d8dabd5ae0608b25a7beb6a20fd67c7c539d05"
	I0817 02:51:45.110354 1685977 cri.go:76] found id: "78c4325f16b23a8e0c2de1c6bab0242fc4abb3c1f4f067ad4d53cc19f9d8c6d3"
	I0817 02:51:45.110359 1685977 cri.go:76] found id: "8ff2e923d4fececca9e36feac69692fefdc6c915178679880a20c2a1d0956edf"
	I0817 02:51:45.110364 1685977 cri.go:76] found id: "708b84fed7b61b21afa376ef8807e544b39450abc93c611d6f112ac4ff06f48e"
	I0817 02:51:45.110375 1685977 cri.go:76] found id: "7278496269401b57811d1a6760d5898522e77d3d73e46421d9bc1e3dd87be48d"
	I0817 02:51:45.110384 1685977 cri.go:76] found id: "41834b1c0478bcecbd69b2ef8b1d5d654426af1909f04dbc4b219acef4a2ecd0"
	I0817 02:51:45.110388 1685977 cri.go:76] found id: "0daab25e4445c73484cec64d8d72127d3a5420b8b54c70d8a155b6fc50297375"
	I0817 02:51:45.110393 1685977 cri.go:76] found id: ""
	I0817 02:51:45.110399 1685977 cri.go:221] Stopping containers: [f19ee84b7c94fa694cc46aa4f13704d95553f850e7887991adc2814948d63f41 49df9fab49f87f7e2113469f38d8dabd5ae0608b25a7beb6a20fd67c7c539d05 78c4325f16b23a8e0c2de1c6bab0242fc4abb3c1f4f067ad4d53cc19f9d8c6d3 8ff2e923d4fececca9e36feac69692fefdc6c915178679880a20c2a1d0956edf 708b84fed7b61b21afa376ef8807e544b39450abc93c611d6f112ac4ff06f48e 7278496269401b57811d1a6760d5898522e77d3d73e46421d9bc1e3dd87be48d 41834b1c0478bcecbd69b2ef8b1d5d654426af1909f04dbc4b219acef4a2ecd0 0daab25e4445c73484cec64d8d72127d3a5420b8b54c70d8a155b6fc50297375]
	I0817 02:51:45.110444 1685977 ssh_runner.go:149] Run: which crictl
	I0817 02:51:45.113091 1685977 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop f19ee84b7c94fa694cc46aa4f13704d95553f850e7887991adc2814948d63f41 49df9fab49f87f7e2113469f38d8dabd5ae0608b25a7beb6a20fd67c7c539d05 78c4325f16b23a8e0c2de1c6bab0242fc4abb3c1f4f067ad4d53cc19f9d8c6d3 8ff2e923d4fececca9e36feac69692fefdc6c915178679880a20c2a1d0956edf 708b84fed7b61b21afa376ef8807e544b39450abc93c611d6f112ac4ff06f48e 7278496269401b57811d1a6760d5898522e77d3d73e46421d9bc1e3dd87be48d 41834b1c0478bcecbd69b2ef8b1d5d654426af1909f04dbc4b219acef4a2ecd0 0daab25e4445c73484cec64d8d72127d3a5420b8b54c70d8a155b6fc50297375
	I0817 02:51:45.136492 1685977 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 02:51:45.145466 1685977 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:51:45.151501 1685977 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 02:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 17 02:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2135 Aug 17 02:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 17 02:49 /etc/kubernetes/scheduler.conf
	
	I0817 02:51:45.151546 1685977 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0817 02:51:45.157465 1685977 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0817 02:51:45.163552 1685977 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0817 02:51:45.169169 1685977 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:45.169215 1685977 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 02:51:45.174787 1685977 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0817 02:51:45.180499 1685977 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:45.180542 1685977 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 02:51:45.186204 1685977 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 02:51:45.192206 1685977 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 02:51:45.192224 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:45.534866 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:47.119038 1685977 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.584142233s)
	I0817 02:51:47.119067 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:47.278495 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:47.375691 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:47.451832 1685977 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:51:47.451890 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:47.961867 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:48.461945 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:48.961915 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:49.461719 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:49.962225 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:50.461930 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:50.962264 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:51.462116 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:51.961392 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:52.461331 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:52.962280 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:53.461496 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:53.961570 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:51.169969 1683677 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0817 02:51:54.462008 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:54.961612 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:55.461888 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:55.961367 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:56.461350 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:56.961953 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:56.983198 1685977 api_server.go:70] duration metric: took 9.531366948s to wait for apiserver process to appear ...
	I0817 02:51:56.983216 1685977 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:51:56.983225 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:52:01.985180 1685977 api_server.go:255] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 02:52:02.485821 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:52:03.291602 1685977 api_server.go:265] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 02:52:03.291631 1685977 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 02:52:03.485767 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:52:03.500401 1685977 api_server.go:265] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 02:52:03.500428 1685977 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 02:52:03.985704 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:52:03.995630 1685977 api_server.go:265] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 02:52:03.995684 1685977 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 02:52:04.485313 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:52:04.493600 1685977 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0817 02:52:04.506676 1685977 api_server.go:139] control plane version: v1.21.3
	I0817 02:52:04.506692 1685977 api_server.go:129] duration metric: took 7.523470676s to wait for apiserver health ...
	I0817 02:52:04.506701 1685977 cni.go:93] Creating CNI manager for ""
	I0817 02:52:04.506709 1685977 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:52:04.508614 1685977 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:52:04.508667 1685977 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:52:04.511864 1685977 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 02:52:04.511881 1685977 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:52:04.524401 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:52:04.815348 1685977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:52:04.829004 1685977 system_pods.go:59] 9 kube-system pods found
	I0817 02:52:04.829034 1685977 system_pods.go:61] "coredns-558bd4d5db-5nznw" [518c8755-cec2-4876-aa9a-9bd786980d36] Running
	I0817 02:52:04.829040 1685977 system_pods.go:61] "etcd-default-k8s-different-port-20210817024852-1554185" [6a3bb1e7-c1cf-4441-82ac-05ffd6545931] Running
	I0817 02:52:04.829048 1685977 system_pods.go:61] "kindnet-kg587" [58fc2d27-5f85-413f-934b-9000fec9f47a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 02:52:04.829054 1685977 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210817024852-1554185" [b67aed7d-0edc-4cf6-ad11-b449ae991ea1] Running
	I0817 02:52:04.829064 1685977 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" [c71fc635-9e84-43de-b5c2-38167b8cb62a] Running
	I0817 02:52:04.829069 1685977 system_pods.go:61] "kube-proxy-ldzzq" [307f545c-fc74-4292-abb7-77375fc4d06d] Running
	I0817 02:52:04.829078 1685977 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210817024852-1554185" [15448286-e6ef-46f8-9cd8-3c3681b4b794] Running
	I0817 02:52:04.829084 1685977 system_pods.go:61] "metrics-server-7c784ccb57-cr7nj" [b25c2ae0-c8e5-4b37-972d-2caac1a37d0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 02:52:04.829096 1685977 system_pods.go:61] "storage-provisioner" [6fbd6fee-aa3d-47cb-9b75-2b7d3a25ba1e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:52:04.829102 1685977 system_pods.go:74] duration metric: took 13.739007ms to wait for pod list to return data ...
	I0817 02:52:04.829113 1685977 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:52:04.836158 1685977 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:52:04.836182 1685977 node_conditions.go:123] node cpu capacity is 2
	I0817 02:52:04.836193 1685977 node_conditions.go:105] duration metric: took 7.076434ms to run NodePressure ...
	I0817 02:52:04.836206 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:52:05.092298 1685977 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 02:52:05.098183 1685977 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0817 02:52:05.463335 1685977 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0817 02:52:05.904251 1685977 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0817 02:52:06.435734 1685977 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0817 02:52:07.220388 1685977 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0817 02:52:08.726608 1685977 kubeadm.go:746] kubelet initialised
	I0817 02:52:08.726629 1685977 kubeadm.go:747] duration metric: took 3.634311049s waiting for restarted kubelet to initialise ...
	I0817 02:52:08.726635 1685977 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:52:08.731053 1685977 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-5nznw" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:10.112475 1683677 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0817 02:52:10.746370 1685977 pod_ready.go:102] pod "coredns-558bd4d5db-5nznw" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:11.750027 1685977 pod_ready.go:92] pod "coredns-558bd4d5db-5nznw" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:11.750059 1685977 pod_ready.go:81] duration metric: took 3.018980591s waiting for pod "coredns-558bd4d5db-5nznw" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:11.750069 1685977 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.348375 1685977 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:13.348400 1685977 pod_ready.go:81] duration metric: took 1.598321976s waiting for pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.348413 1685977 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.352181 1685977 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:13.352201 1685977 pod_ready.go:81] duration metric: took 3.780558ms waiting for pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.352211 1685977 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.355850 1685977 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:13.355867 1685977 pod_ready.go:81] duration metric: took 3.648694ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.355876 1685977 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ldzzq" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.359651 1685977 pod_ready.go:92] pod "kube-proxy-ldzzq" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:13.359669 1685977 pod_ready.go:81] duration metric: took 3.785818ms waiting for pod "kube-proxy-ldzzq" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.359679 1685977 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:14.368765 1685977 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:14.368790 1685977 pod_ready.go:81] duration metric: took 1.009102089s waiting for pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:14.368800 1685977 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:16.548731 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:18.548909 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:21.049710 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:23.548889 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:25.549067 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:28.049511 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:25.563354 1683677 kubeadm.go:746] kubelet initialised
	I0817 02:52:25.563374 1683677 kubeadm.go:747] duration metric: took 58.435709937s waiting for restarted kubelet to initialise ...
	I0817 02:52:25.563381 1683677 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:52:25.568074 1683677 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-jp8m9" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.577170 1683677 pod_ready.go:92] pod "coredns-fb8b8dccf-jp8m9" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.577194 1683677 pod_ready.go:81] duration metric: took 9.090528ms waiting for pod "coredns-fb8b8dccf-jp8m9" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.577204 1683677 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-mcnc6" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.580884 1683677 pod_ready.go:92] pod "coredns-fb8b8dccf-mcnc6" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.580901 1683677 pod_ready.go:81] duration metric: took 3.691246ms waiting for pod "coredns-fb8b8dccf-mcnc6" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.580908 1683677 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.584951 1683677 pod_ready.go:92] pod "etcd-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.584969 1683677 pod_ready.go:81] duration metric: took 4.05418ms waiting for pod "etcd-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.584979 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.588894 1683677 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.588913 1683677 pod_ready.go:81] duration metric: took 3.925311ms waiting for pod "kube-apiserver-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.588922 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.961580 1683677 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.961598 1683677 pod_ready.go:81] duration metric: took 372.668062ms waiting for pod "kube-controller-manager-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.961609 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nhh5q" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.362231 1683677 pod_ready.go:92] pod "kube-proxy-nhh5q" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:26.362251 1683677 pod_ready.go:81] duration metric: took 400.63554ms waiting for pod "kube-proxy-nhh5q" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.362261 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.761518 1683677 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:26.761538 1683677 pod_ready.go:81] duration metric: took 399.268628ms waiting for pod "kube-scheduler-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.761549 1683677 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:29.166693 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:30.549224 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:33.049782 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:31.166878 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:33.166960 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:35.549390 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:38.050107 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:35.667458 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:38.167056 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:40.167129 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:40.548895 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:43.050133 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:42.665923 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:44.674856 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:45.548666 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:47.548790 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:47.166577 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:49.167004 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:50.049426 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:52.548979 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:51.666930 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:54.166024 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:55.048240 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:57.048958 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:59.049140 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:56.167473 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:58.667062 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:01.049548 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:03.054245 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:01.166392 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:03.166660 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:05.549727 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:08.049176 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:05.667248 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:08.167175 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:10.167277 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:10.050022 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:12.549231 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:12.666646 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:14.667107 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:15.049207 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:17.049576 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:19.054040 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:17.166790 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:19.666869 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:21.548395 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:24.049554 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:21.666936 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:23.667266 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:26.549088 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:29.049801 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:25.667896 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:28.166786 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:30.166991 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:31.050272 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:33.548283 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:32.666672 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:34.674735 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:35.548389 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:37.549130 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:37.166927 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:39.667330 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:40.048932 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:42.050111 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:42.167056 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:44.666646 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:44.548604 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:46.549481 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:49.048921 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:46.667864 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:49.166934 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:51.548413 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:54.050582 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:51.667345 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:54.166496 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:56.549207 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:59.049117 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:56.167389 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:58.666838 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:01.548661 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:03.548776 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:00.666895 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:03.166942 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:05.549284 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:08.049674 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:05.666841 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:08.166343 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:10.167428 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:10.548566 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:12.548793 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:12.666920 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:15.167557 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:15.048904 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:17.548621 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:17.667314 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:20.167520 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:19.548686 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:21.549417 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:24.049484 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:22.667134 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:25.167010 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:26.548211 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:28.548976 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:27.167117 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:29.666570 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:30.549059 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:33.049494 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:32.166673 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:34.167205 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:35.070574 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:37.548051 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:36.167278 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:38.666736 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:39.548395 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:42.049591 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:41.166234 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:43.167351 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:44.547941 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:46.548909 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:49.049493 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:45.666902 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:48.166473 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:50.167180 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:51.548264 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:53.548677 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:52.666625 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:55.166893 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:56.048889 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:58.051782 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:57.167206 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:59.667203 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:00.548600 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:03.049154 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:02.166246 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:04.166904 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:05.548239 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:07.549020 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:06.167605 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:08.666362 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:10.049433 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:12.052731 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:10.666627 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:13.166461 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:14.549237 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:16.549496 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:19.049161 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:15.666987 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:17.667555 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:20.167216 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:21.548251 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:24.048643 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:22.666194 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:24.670235 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:26.048790 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:28.548918 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:27.166353 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:29.166945 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:31.049262 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:33.548201 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:31.666651 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:34.166901 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:35.552149 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:38.048626 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:36.666743 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:39.166482 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:40.050022 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:42.052127 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:41.167158 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:43.176080 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:44.550109 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:47.050193 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:45.666970 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:48.167040 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:49.548516 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:52.048827 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:50.667197 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:53.167284 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:54.549177 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:57.048920 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:55.666563 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:58.165868 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:00.166547 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:59.548365 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:02.048722 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:04.049389 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:02.169578 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:04.666593 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:06.049778 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:08.548472 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:06.666772 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:08.668718 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:10.548795 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:13.048705 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:11.167180 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:13.666474 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:14.545138 1685977 pod_ready.go:81] duration metric: took 4m0.176323682s waiting for pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace to be "Ready" ...
	E0817 02:56:14.545160 1685977 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0817 02:56:14.545176 1685977 pod_ready.go:38] duration metric: took 4m5.818529856s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:56:14.545204 1685977 kubeadm.go:604] restartCluster took 4m32.507951116s
	W0817 02:56:14.545314 1685977 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 02:56:14.545348 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0817 02:56:16.599387 1685977 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.054018506s)
	I0817 02:56:16.599456 1685977 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0817 02:56:16.608972 1685977 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:56:16.609032 1685977 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:56:16.631456 1685977 cri.go:76] found id: ""
	I0817 02:56:16.631506 1685977 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 02:56:16.639516 1685977 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 02:56:16.639567 1685977 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:56:16.645552 1685977 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 02:56:16.645586 1685977 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 02:56:16.972170 1685977 out.go:204]   - Generating certificates and keys ...
	I0817 02:56:18.696663 1685977 out.go:204]   - Booting up control plane ...
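(Editorial sketch, not part of the captured output.) The lines above record minikube's fallback when the restart wait times out: tear the cluster down and re-run kubeadm init against the already-staged config. Condensed into a hand-runnable form, assuming the same binary path, CRI socket and staged kubeadm.yaml shown in the log (minikube actually drives these steps through its ssh_runner, not an interactive shell):

    # Reset the control plane and wipe kubelet state (flags copied from the log above).
    sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force
    sudo systemctl stop -f kubelet
    # Confirm no kube-system containers are left behind before re-initialising.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Promote the freshly generated config and re-run kubeadm init with the same preflight exclusions.
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables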
	I0817 02:56:15.667122 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:17.667179 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:19.667252 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:22.166690 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:24.167040 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:26.666042 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:27.162685 1683677 pod_ready.go:81] duration metric: took 4m0.40112198s waiting for pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace to be "Ready" ...
	E0817 02:56:27.162707 1683677 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0817 02:56:27.162724 1683677 pod_ready.go:38] duration metric: took 4m1.599333201s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:56:27.162750 1683677 kubeadm.go:604] restartCluster took 5m19.132650156s
	W0817 02:56:27.162885 1683677 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 02:56:27.162914 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0817 02:56:29.771314 1683677 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.608376314s)
	I0817 02:56:29.771371 1683677 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0817 02:56:29.783800 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:56:29.783862 1683677 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:56:29.828158 1683677 cri.go:76] found id: ""
	I0817 02:56:29.828206 1683677 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 02:56:29.841550 1683677 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 02:56:29.841592 1683677 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:56:29.851739 1683677 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 02:56:29.851771 1683677 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 02:56:30.528257 1683677 out.go:204]   - Generating certificates and keys ...
	I0817 02:56:34.213997 1683677 out.go:204]   - Booting up control plane ...
	I0817 02:56:40.265745 1685977 out.go:204]   - Configuring RBAC rules ...
	I0817 02:56:40.719797 1685977 cni.go:93] Creating CNI manager for ""
	I0817 02:56:40.719816 1685977 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:56:40.721573 1685977 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:56:40.721629 1685977 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:56:40.725046 1685977 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 02:56:40.725060 1685977 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:56:40.745867 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:56:41.043942 1685977 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:56:41.044090 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:41.044170 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210817024852-1554185 minikube.k8s.io/updated_at=2021_08_17T02_56_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:41.065176 1685977 ops.go:34] apiserver oom_adj: -16
	I0817 02:56:41.177959 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:41.757346 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:42.256794 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:42.756791 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:43.257487 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:43.757647 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:44.256760 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:44.757032 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:45.257420 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:45.757557 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:46.256819 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:46.757492 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:47.257361 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:47.756853 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:48.257270 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:48.756824 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:49.257196 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:49.756775 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:50.257140 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:50.757021 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:51.257491 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:51.757426 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:52.257111 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:52.756899 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:53.257803 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:53.756814 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:54.257291 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:54.757459 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:55.257423 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:55.349810 1685977 kubeadm.go:985] duration metric: took 14.305777046s to wait for elevateKubeSystemPrivileges.
	I0817 02:56:55.349836 1685977 kubeadm.go:392] StartCluster complete in 5m13.356990711s
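(Editorial sketch, not part of the captured output.) The burst of repeated "kubectl get sa default" invocations between 02:56:41 and 02:56:55 is minikube polling roughly twice per second until the cluster's default ServiceAccount exists; the log attributes the whole wait to elevateKubeSystemPrivileges (about 14.3s). A minimal equivalent of that poll, assuming the binary and kubeconfig paths shown in the log and ignoring minikube's internal retry details:

    KUBECTL=/var/lib/minikube/binaries/v1.21.3/kubectl
    # Poll until the "default" ServiceAccount appears in the new cluster.
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done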
	I0817 02:56:55.349852 1685977 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:56:55.349932 1685977 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:56:55.350990 1685977 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:56:55.873850 1685977 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210817024852-1554185" rescaled to 1
	I0817 02:56:55.873900 1685977 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 02:56:55.875535 1685977 out.go:177] * Verifying Kubernetes components...
	I0817 02:56:55.873948 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:56:55.875608 1685977 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:56:55.874174 1685977 config.go:177] Loaded profile config "default-k8s-different-port-20210817024852-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:56:55.874189 1685977 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0817 02:56:55.875761 1685977 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:55.875784 1685977 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:55.875807 1685977 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:55.875852 1685977 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:55.875874 1685977 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210817024852-1554185"
	W0817 02:56:55.875892 1685977 addons.go:147] addon dashboard should already be in state true
	I0817 02:56:55.875924 1685977 host.go:66] Checking if "default-k8s-different-port-20210817024852-1554185" exists ...
	I0817 02:56:55.876181 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:56:55.876436 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:56:55.875789 1685977 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210817024852-1554185"
	W0817 02:56:55.876568 1685977 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:56:55.876587 1685977 host.go:66] Checking if "default-k8s-different-port-20210817024852-1554185" exists ...
	I0817 02:56:55.876999 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:56:55.875810 1685977 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:55.877052 1685977 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210817024852-1554185"
	W0817 02:56:55.877058 1685977 addons.go:147] addon metrics-server should already be in state true
	I0817 02:56:55.877072 1685977 host.go:66] Checking if "default-k8s-different-port-20210817024852-1554185" exists ...
	I0817 02:56:55.877456 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:56:55.995544 1685977 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210817024852-1554185"
	W0817 02:56:55.995564 1685977 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:56:55.995586 1685977 host.go:66] Checking if "default-k8s-different-port-20210817024852-1554185" exists ...
	I0817 02:56:55.996023 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:56:56.015954 1685977 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0817 02:56:56.016016 1685977 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 02:56:56.016025 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 02:56:56.016078 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:56:56.019888 1685977 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0817 02:56:56.022681 1685977 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0817 02:56:56.022730 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 02:56:56.022738 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 02:56:56.022784 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:56:56.059397 1685977 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:56:56.059492 1685977 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:56:56.059501 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:56:56.059554 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:56:56.082725 1685977 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210817024852-1554185" to be "Ready" ...
	I0817 02:56:56.083763 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 02:56:56.084745 1685977 node_ready.go:49] node "default-k8s-different-port-20210817024852-1554185" has status "Ready":"True"
	I0817 02:56:56.084758 1685977 node_ready.go:38] duration metric: took 2.009433ms waiting for node "default-k8s-different-port-20210817024852-1554185" to be "Ready" ...
	I0817 02:56:56.084769 1685977 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:56:56.096809 1685977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-8rfj4" in "kube-system" namespace to be "Ready" ...
	I0817 02:56:56.149127 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:56:56.191058 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:56:56.191637 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:56:56.192709 1685977 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:56:56.192724 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:56:56.192771 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:56:56.248139 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:56:56.330588 1685977 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:56:56.362001 1685977 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:56:56.538907 1685977 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 02:56:56.538927 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0817 02:56:56.545738 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 02:56:56.545757 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 02:56:56.644902 1685977 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 02:56:56.644922 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 02:56:56.663120 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 02:56:56.663138 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 02:56:56.809832 1685977 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 02:56:56.809854 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 02:56:56.891456 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 02:56:56.891478 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 02:56:57.116106 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 02:56:57.116129 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0817 02:56:57.141884 1685977 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 02:56:57.165409 1685977 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.08162096s)
	I0817 02:56:57.165436 1685977 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
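(Editorial sketch, not part of the captured output.) The completed one-liner above rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the host gateway (192.168.49.1 on this network). A way to verify the injected block by hand, using the context name this run configured:

    kubectl --context default-k8s-different-port-20210817024852-1554185 \
      -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    # Expected fragment inside the Corefile, per the sed pipeline in the log:
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }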
	I0817 02:56:57.213794 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 02:56:57.213816 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 02:56:57.316053 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 02:56:57.316075 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 02:56:57.434942 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 02:56:57.434964 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 02:56:57.511781 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 02:56:57.511802 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 02:56:57.621303 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 02:56:57.621326 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 02:56:57.627698 1685977 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.297055547s)
	I0817 02:56:57.627784 1685977 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.265762885s)
	I0817 02:56:57.653417 1685977 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 02:56:58.257285 1685977 pod_ready.go:102] pod "coredns-558bd4d5db-8rfj4" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:58.515929 1685977 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.374011036s)
	I0817 02:56:58.515960 1685977 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:59.006667 1685977 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.353175904s)
	I0817 02:56:59.008617 1685977 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0817 02:56:59.008637 1685977 addons.go:344] enableAddons completed in 3.134454092s
	I0817 02:56:59.619976 1685977 pod_ready.go:92] pod "coredns-558bd4d5db-8rfj4" in "kube-system" namespace has status "Ready":"True"
	I0817 02:56:59.620007 1685977 pod_ready.go:81] duration metric: took 3.52314652s waiting for pod "coredns-558bd4d5db-8rfj4" in "kube-system" namespace to be "Ready" ...
	I0817 02:56:59.620032 1685977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-wtlfm" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:01.635272 1685977 pod_ready.go:102] pod "coredns-558bd4d5db-wtlfm" in "kube-system" namespace has status "Ready":"False"
	I0817 02:57:03.626539 1685977 pod_ready.go:97] error getting pod "coredns-558bd4d5db-wtlfm" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-wtlfm" not found
	I0817 02:57:03.626566 1685977 pod_ready.go:81] duration metric: took 4.006519971s waiting for pod "coredns-558bd4d5db-wtlfm" in "kube-system" namespace to be "Ready" ...
	E0817 02:57:03.626577 1685977 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-wtlfm" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-wtlfm" not found
	I0817 02:57:03.626585 1685977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.637703 1685977 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:57:03.637721 1685977 pod_ready.go:81] duration metric: took 11.127045ms waiting for pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.637735 1685977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.642094 1685977 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:57:03.642111 1685977 pod_ready.go:81] duration metric: took 4.368458ms waiting for pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.642121 1685977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.647863 1685977 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:57:03.647880 1685977 pod_ready.go:81] duration metric: took 5.750638ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.647891 1685977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5mnnj" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.653034 1685977 pod_ready.go:92] pod "kube-proxy-5mnnj" in "kube-system" namespace has status "Ready":"True"
	I0817 02:57:03.653050 1685977 pod_ready.go:81] duration metric: took 5.152114ms waiting for pod "kube-proxy-5mnnj" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.653058 1685977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.827111 1685977 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:57:03.827170 1685977 pod_ready.go:81] duration metric: took 174.102098ms waiting for pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.827191 1685977 pod_ready.go:38] duration metric: took 7.7424112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:57:03.827217 1685977 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:57:03.827284 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:57:03.850493 1685977 api_server.go:70] duration metric: took 7.976564897s to wait for apiserver process to appear ...
	I0817 02:57:03.850552 1685977 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:57:03.850574 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:57:03.859078 1685977 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0817 02:57:03.859754 1685977 api_server.go:139] control plane version: v1.21.3
	I0817 02:57:03.859770 1685977 api_server.go:129] duration metric: took 9.200861ms to wait for apiserver health ...
	I0817 02:57:03.859776 1685977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:57:04.030424 1685977 system_pods.go:59] 9 kube-system pods found
	I0817 02:57:04.030522 1685977 system_pods.go:61] "coredns-558bd4d5db-8rfj4" [c0122f9e-6b2a-4ee2-ae8a-3985bbc5160d] Running
	I0817 02:57:04.030541 1685977 system_pods.go:61] "etcd-default-k8s-different-port-20210817024852-1554185" [15ccf197-37be-47d8-9a87-49c904ab5e74] Running
	I0817 02:57:04.030559 1685977 system_pods.go:61] "kindnet-jvbx9" [c3ef7b0d-aa4d-431f-85c3-eec88c3223bc] Running
	I0817 02:57:04.030576 1685977 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210817024852-1554185" [242d416f-9292-4cbd-b182-8aafe6cec200] Running
	I0817 02:57:04.030610 1685977 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" [aeab199a-072d-4b0b-bf31-d78123bb018f] Running
	I0817 02:57:04.030628 1685977 system_pods.go:61] "kube-proxy-5mnnj" [6b672c7a-ea5e-4ef4-932c-95a01336037e] Running
	I0817 02:57:04.030645 1685977 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210817024852-1554185" [4fdad56d-4f2b-4cc1-8708-def6cb1f3602] Running
	I0817 02:57:04.030664 1685977 system_pods.go:61] "metrics-server-7c784ccb57-67mmz" [d68b6163-f479-44ce-b297-206cc3375f8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 02:57:04.030690 1685977 system_pods.go:61] "storage-provisioner" [b4483bc5-0558-4d83-96e9-b61e6cb235ae] Running
	I0817 02:57:04.030714 1685977 system_pods.go:74] duration metric: took 170.930413ms to wait for pod list to return data ...
	I0817 02:57:04.030731 1685977 default_sa.go:34] waiting for default service account to be created ...
	I0817 02:57:04.227758 1685977 default_sa.go:45] found service account: "default"
	I0817 02:57:04.227782 1685977 default_sa.go:55] duration metric: took 197.046322ms for default service account to be created ...
	I0817 02:57:04.227790 1685977 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 02:57:04.430644 1685977 system_pods.go:86] 9 kube-system pods found
	I0817 02:57:04.430674 1685977 system_pods.go:89] "coredns-558bd4d5db-8rfj4" [c0122f9e-6b2a-4ee2-ae8a-3985bbc5160d] Running
	I0817 02:57:04.430681 1685977 system_pods.go:89] "etcd-default-k8s-different-port-20210817024852-1554185" [15ccf197-37be-47d8-9a87-49c904ab5e74] Running
	I0817 02:57:04.430687 1685977 system_pods.go:89] "kindnet-jvbx9" [c3ef7b0d-aa4d-431f-85c3-eec88c3223bc] Running
	I0817 02:57:04.430693 1685977 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210817024852-1554185" [242d416f-9292-4cbd-b182-8aafe6cec200] Running
	I0817 02:57:04.430698 1685977 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" [aeab199a-072d-4b0b-bf31-d78123bb018f] Running
	I0817 02:57:04.430703 1685977 system_pods.go:89] "kube-proxy-5mnnj" [6b672c7a-ea5e-4ef4-932c-95a01336037e] Running
	I0817 02:57:04.430709 1685977 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210817024852-1554185" [4fdad56d-4f2b-4cc1-8708-def6cb1f3602] Running
	I0817 02:57:04.430723 1685977 system_pods.go:89] "metrics-server-7c784ccb57-67mmz" [d68b6163-f479-44ce-b297-206cc3375f8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 02:57:04.430734 1685977 system_pods.go:89] "storage-provisioner" [b4483bc5-0558-4d83-96e9-b61e6cb235ae] Running
	I0817 02:57:04.430741 1685977 system_pods.go:126] duration metric: took 202.94681ms to wait for k8s-apps to be running ...
	I0817 02:57:04.430752 1685977 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 02:57:04.430797 1685977 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:57:04.440008 1685977 system_svc.go:56] duration metric: took 9.251797ms WaitForService to wait for kubelet.
	I0817 02:57:04.440053 1685977 kubeadm.go:547] duration metric: took 8.566127551s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 02:57:04.440080 1685977 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:57:04.633083 1685977 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:57:04.633108 1685977 node_conditions.go:123] node cpu capacity is 2
	I0817 02:57:04.633120 1685977 node_conditions.go:105] duration metric: took 193.033817ms to run NodePressure ...
	I0817 02:57:04.633130 1685977 start.go:231] waiting for startup goroutines ...
	I0817 02:57:04.716921 1685977 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 02:57:04.719057 1685977 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210817024852-1554185" cluster and "default" namespace by default
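(Editorial note, not part of the captured output.) Both clusters above exhausted their 4m0s WaitExtra budget on a metrics-server pod and fell back to a full reset, and the addon setup earlier in this log reports "- Using image fake.domain/k8s.gcr.io/echoserver:1.4" for metrics-server, which is consistent with the pods never pulling an image and never reaching Ready. Two hedged diagnostics against this profile (context name copied from the log; the deployment name is inferred from the metrics-server-7c784ccb57-* pod names):

    CTX=default-k8s-different-port-20210817024852-1554185
    # Show the stuck metrics-server pod and its node placement.
    kubectl --context "$CTX" -n kube-system get pods -o wide | grep metrics-server
    # Inspect the deployment's image and recent events (expect image pull failures for fake.domain).
    kubectl --context "$CTX" -n kube-system describe deployment metrics-server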
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	49c1c104eec15       523cad1a4df73       27 seconds ago       Exited              dashboard-metrics-scraper   1                   ab391ac8df2d6
	c59361d49c6ac       85e6c0cff043f       32 seconds ago       Running             kubernetes-dashboard        0                   208e67c0b68d2
	69ead03de4d0c       ba04bb24b9575       33 seconds ago       Exited              storage-provisioner         0                   0506d9abb91c5
	99869cae05700       4ea38350a1beb       35 seconds ago       Running             kube-proxy                  0                   07d92f4977907
	9c7e6513031e2       f37b7c809e5dc       35 seconds ago       Running             kindnet-cni                 0                   9682c574efcad
	1bdff89355457       1a1f05a2cd7c2       35 seconds ago       Running             coredns                     0                   c4296c2c68eef
	2a39eb75e1b7a       05b738aa1bc63       About a minute ago   Running             etcd                        0                   d9d56919ab979
	18cab4bfea9e2       31a3b96cefc1e       About a minute ago   Running             kube-scheduler              0                   a279c06813b2d
	ac51e68317b3c       cb310ff289d79       About a minute ago   Running             kube-controller-manager     0                   4275fafdcad20
	8881511682909       44a6d50ef170d       About a minute ago   Running             kube-apiserver              0                   4a4578b9de7d2
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:51:25 UTC, end at Tue 2021-08-17 02:57:32 UTC. --
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.546054197Z" level=info msg="Finish piping stdout of container \"266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2\""
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.546079690Z" level=info msg="Finish piping stderr of container \"266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2\""
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.550923064Z" level=info msg="StartContainer for \"266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2\" returns successfully"
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.551059645Z" level=info msg="TaskExit event &TaskExit{ContainerID:266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2,ID:266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2,Pid:5938,ExitStatus:1,ExitedAt:2021-08-17 02:57:04.548271349 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.583433307Z" level=info msg="shim disconnected" id=266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.583588497Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.241580825Z" level=info msg="CreateContainer within sandbox \"ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.261856177Z" level=info msg="CreateContainer within sandbox \"ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37\""
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.262940072Z" level=info msg="StartContainer for \"49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37\""
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.343337587Z" level=info msg="Finish piping stderr of container \"49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37\""
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.343515316Z" level=info msg="Finish piping stdout of container \"49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37\""
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.346962585Z" level=info msg="StartContainer for \"49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37\" returns successfully"
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.347049050Z" level=info msg="TaskExit event &TaskExit{ContainerID:49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37,ID:49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37,Pid:6021,ExitStatus:1,ExitedAt:2021-08-17 02:57:05.344677561 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.373819181Z" level=info msg="shim disconnected" id=49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.373984242Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:06.237102664Z" level=info msg="RemoveContainer for \"266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2\""
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:06.242416638Z" level=info msg="RemoveContainer for \"266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2\" returns successfully"
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:15.136595750Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:15.140555448Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:15.141900034Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Aug 17 02:57:27 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:27.692806698Z" level=info msg="Finish piping stderr of container \"69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8\""
	Aug 17 02:57:27 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:27.692949786Z" level=info msg="Finish piping stdout of container \"69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8\""
	Aug 17 02:57:27 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:27.694784107Z" level=info msg="TaskExit event &TaskExit{ContainerID:69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8,ID:69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8,Pid:5603,ExitStatus:255,ExitedAt:2021-08-17 02:57:27.694351036 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 02:57:27 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:27.722785241Z" level=info msg="shim disconnected" id=69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8
	Aug 17 02:57:27 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:27.723005916Z" level=error msg="copy shim log" error="read /proc/self/fd/116: file already closed"
	
	* 
	* ==> coredns [1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45] <==
	* raft2021/08/17 02:56:28 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 02:56:28.959764 W | auth: simple token is not cryptographically signed
	2021-08-17 02:56:28.979366 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-17 02:56:28.986984 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/17 02:56:28 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 02:56:28.987295 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 02:56:28.990867 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 02:56:28.990996 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-17 02:56:28.991084 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/17 02:56:29 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 02:56:29 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 02:56:29 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 02:56:29 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 02:56:29 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 02:56:29.754369 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 02:56:29.759447 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 02:56:29.759499 I | etcdserver: published {Name:default-k8s-different-port-20210817024852-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 02:56:29.759509 I | embed: ready to serve client requests
	2021-08-17 02:56:29.763696 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:56:29.763803 I | embed: ready to serve client requests
	2021-08-17 02:56:29.765803 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 02:56:29.819959 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 02:56:52.218658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:56:57.675064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:57:07.670759 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:57:47 up 10:40,  0 users,  load average: 2.44, 1.74, 1.63
	Linux default-k8s-different-port-20210817024852-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173] <==
	* I0817 02:57:38.717359       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:38.717368       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:40.401933       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:40.782710       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:40.790735       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:41.818633       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:41.877486       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:41.899220       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:42.080921       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:42.761633       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:42.865109       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:42.891432       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0817 02:57:44.591404       1 client.go:360] parsed scheme: "passthrough"
	I0817 02:57:44.591442       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 02:57:44.591450       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 02:57:47.611348       1 trace.go:205] Trace[2092255498]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (17-Aug-2021 02:57:32.738) (total time: 14872ms):
	Trace[2092255498]: [14.872406697s] [14.872406697s] END
	E0817 02:57:47.611449       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0817 02:57:47.611714       1 trace.go:205] Trace[1327991365]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/arm64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (17-Aug-2021 02:57:32.738) (total time: 14872ms):
	Trace[1327991365]: [14.872790865s] [14.872790865s] END
	I0817 02:57:47.612078       1 trace.go:205] Trace[2129023585]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (17-Aug-2021 02:57:17.641) (total time: 29971ms):
	Trace[2129023585]: [29.97105521s] [29.97105521s] END
	E0817 02:57:47.612174       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0817 02:57:47.613452       1 trace.go:205] Trace[1415691973]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/arm64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (17-Aug-2021 02:57:17.640) (total time: 29972ms):
	Trace[1415691973]: [29.972443602s] [29.972443602s] END
	
	* 
	* ==> kube-controller-manager [ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a] <==
	* I0817 02:56:58.612446       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0817 02:56:58.650640       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 02:56:58.660467       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.666523       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0817 02:56:58.683025       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.683445       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 02:56:58.686218       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 02:56:58.691392       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 02:56:58.698676       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.699092       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 02:56:58.707169       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.707531       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 02:56:58.715412       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 02:56:58.715741       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.715883       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 02:56:58.716005       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 02:56:58.729273       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 02:56:58.729396       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.736111       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 02:56:58.736947       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.797910       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-h5wgx"
	I0817 02:56:58.849622       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-twxcq"
	I0817 02:56:59.080534       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0817 02:57:24.403599       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 02:57:24.941857       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b] <==
	* I0817 02:56:57.457781       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:56:57.457830       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:56:57.457854       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:56:57.506259       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:56:57.506290       1 server_others.go:212] Using iptables Proxier.
	I0817 02:56:57.512218       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:56:57.512251       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:56:57.512520       1 server.go:643] Version: v1.21.3
	I0817 02:56:57.521149       1 config.go:315] Starting service config controller
	I0817 02:56:57.521165       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:56:57.521189       1 config.go:224] Starting endpoint slice config controller
	I0817 02:56:57.521192       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:56:57.539450       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:56:57.543041       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:56:57.622884       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 02:56:57.622946       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b] <==
	* W0817 02:56:37.860034       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 02:56:37.980720       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 02:56:37.981365       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 02:56:37.981384       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 02:56:37.981406       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 02:56:37.991062       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:56:37.997958       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:56:37.998053       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 02:56:37.998115       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:56:37.998348       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:56:37.998424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:56:37.998491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:56:37.998558       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:56:37.998623       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:56:37.998689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 02:56:37.998738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:56:38.000360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:56:38.000486       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 02:56:38.002109       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:56:38.840276       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:56:38.903980       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:56:38.912303       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:56:38.972949       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:56:39.012370       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0817 02:56:39.384997       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:51:25 UTC, end at Tue 2021-08-17 02:57:47 UTC. --
	Aug 17 02:56:59 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:56:59.225876    4621 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 02:56:59 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:56:59.226009    4621 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 02:56:59 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:56:59.226457    4621 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d7z6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Pro
be{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,
VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-67mmz_kube-system(d68b6163-f479-44ce-b297-206cc3375f8f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Aug 17 02:56:59 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:56:59.226625    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-7c784ccb57-67mmz" podUID=d68b6163-f479-44ce-b297-206cc3375f8f
	Aug 17 02:56:59 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:56:59.425842    4621 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podb4483bc5-0558-4d83-96e9-b61e6cb235ae\": RecentStats: unable to find data in memory cache]"
	Aug 17 02:57:00 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:00.210696    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-67mmz" podUID=d68b6163-f479-44ce-b297-206cc3375f8f
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:05.220323    4621 scope.go:111] "RemoveContainer" containerID="266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2"
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: W0817 02:57:06.008472    4621 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod1314b7d4-1f3d-489b-81c9-9e21210da53e/266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2 WatchSource:0}: task 266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2 not found: not found
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:06.223330    4621 scope.go:111] "RemoveContainer" containerID="266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2"
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:06.223640    4621 scope.go:111] "RemoveContainer" containerID="49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37"
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:06.223943    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-twxcq_kubernetes-dashboard(1314b7d4-1f3d-489b-81c9-9e21210da53e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-twxcq" podUID=1314b7d4-1f3d-489b-81c9-9e21210da53e
	Aug 17 02:57:07 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:07.226304    4621 scope.go:111] "RemoveContainer" containerID="49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37"
	Aug 17 02:57:07 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:07.226576    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-twxcq_kubernetes-dashboard(1314b7d4-1f3d-489b-81c9-9e21210da53e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-twxcq" podUID=1314b7d4-1f3d-489b-81c9-9e21210da53e
	Aug 17 02:57:07 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: W0817 02:57:07.513355    4621 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod1314b7d4-1f3d-489b-81c9-9e21210da53e/49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37 WatchSource:0}: task 49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37 not found: not found
	Aug 17 02:57:09 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:09.500390    4621 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podb4483bc5-0558-4d83-96e9-b61e6cb235ae\": RecentStats: unable to find data in memory cache]"
	Aug 17 02:57:12 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:12.123484    4621 scope.go:111] "RemoveContainer" containerID="49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37"
	Aug 17 02:57:12 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:12.123808    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-twxcq_kubernetes-dashboard(1314b7d4-1f3d-489b-81c9-9e21210da53e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-twxcq" podUID=1314b7d4-1f3d-489b-81c9-9e21210da53e
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:15.142072    4621 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:15.142113    4621 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:15.142211    4621 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d7z6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Pro
be{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,
VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-67mmz_kube-system(d68b6163-f479-44ce-b297-206cc3375f8f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:15.142253    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-7c784ccb57-67mmz" podUID=d68b6163-f479-44ce-b297-206cc3375f8f
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:15.917743    4621 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18] <==
	* 2021/08/17 02:57:00 Using namespace: kubernetes-dashboard
	2021/08/17 02:57:00 Using in-cluster config to connect to apiserver
	2021/08/17 02:57:00 Using secret token for csrf signing
	2021/08/17 02:57:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/17 02:57:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/17 02:57:00 Successful initial request to the apiserver, version: v1.21.3
	2021/08/17 02:57:00 Generating JWE encryption key
	2021/08/17 02:57:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/17 02:57:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/17 02:57:00 Initializing JWE encryption key from synchronized object
	2021/08/17 02:57:00 Creating in-cluster Sidecar client
	2021/08/17 02:57:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/17 02:57:00 Serving insecurely on HTTP port: 9090
	2021/08/17 02:57:00 Starting overwatch
	
	* 
	* ==> storage-provisioner [69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8] <==
	* 	/usr/local/go/src/sync/cond.go:56 +0xb8
	k8s.io/client-go/util/workqueue.(*Type).Get(0x400047d680, 0x0, 0x0, 0x1c200)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x84
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0x40000df680, 0x1298cd0, 0x40002faa80, 0x40000e6d20)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x34
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x54
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x40001e6d20)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x64
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40001e6d20, 0x1267368, 0x40001ecd80, 0x1, 0x40000e6360)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x74
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40001e6d20, 0x3b9aca00, 0x0, 0x1, 0x40000e6360)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x88
	k8s.io/apimachinery/pkg/util/wait.Until(0x40001e6d20, 0x3b9aca00, 0x40000e6360)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x48
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x308
	
	goroutine 108 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0x40000d9d00, 0x40003a0280)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x31c
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 02:57:47.627095 1698052 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = transport is closing
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = transport is closing\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
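For orientation only, here is a minimal Go sketch of the kind of post-mortem container check that the harness performs in the section that follows. The helper name inspectState and the standalone main are illustrative assumptions, not part of helpers_test.go; the sketch simply shells out to `docker inspect --format`, the same CLI call whose full output is reproduced verbatim below.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectState returns the .State.Status of a Docker container
	// (e.g. "running" or "exited") using docker inspect's Go-template output.
	func inspectState(name string) (string, error) {
		out, err := exec.Command("docker", "inspect", "--format", "{{.State.Status}}", name).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Container name taken from the post-mortem output below.
		status, err := inspectState("default-k8s-different-port-20210817024852-1554185")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("container state:", status)
	}
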
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20210817024852-1554185
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20210817024852-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15",
	        "Created": "2021-08-17T02:48:53.903905113Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1686164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:51:25.232423109Z",
	            "FinishedAt": "2021-08-17T02:51:23.69333619Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15/hostname",
	        "HostsPath": "/var/lib/docker/containers/014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15/hosts",
	        "LogPath": "/var/lib/docker/containers/014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15/014007154d015ac8e8091bdc467522abde7243f660feaef4ce6e550a017a3a15-json.log",
	        "Name": "/default-k8s-different-port-20210817024852-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210817024852-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20210817024852-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/eb8af045cee76116725a046b85b3b0ae49569752ba414ad4365051160f03a64c-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb8af045cee76116725a046b85b3b0ae49569752ba414ad4365051160f03a64c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb8af045cee76116725a046b85b3b0ae49569752ba414ad4365051160f03a64c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb8af045cee76116725a046b85b3b0ae49569752ba414ad4365051160f03a64c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210817024852-1554185",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210817024852-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210817024852-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210817024852-1554185",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210817024852-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e033c7cdd54e2e8a1b304b25fd2480d92ef219b4f58e99aa66ff497a5733fe77",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50469"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50471"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50470"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e033c7cdd54e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20210817024852-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "014007154d01",
	                        "default-k8s-different-port-20210817024852-1554185"
	                    ],
	                    "NetworkID": "c01a2de90263b8c4a0f0d301a1f3067482e97e685086a463971911b51d3b8270",
	                    "EndpointID": "77f5da90ca9fa3abde61c8a1c95c0bd17d1bf917f80b53a0c6874d8f261f500b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
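For reference, the host-mapped SSH port recorded under NetworkSettings.Ports in the inspect output above (127.0.0.1:50473 for 22/tcp) can be read back with the same Go-template query the test harness itself runs later in this log; the container name is the profile shown above.

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-different-port-20210817024852-1554185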
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210817024852-1554185 -n default-k8s-different-port-20210817024852-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210817024852-1554185 -n default-k8s-different-port-20210817024852-1554185: exit status 2 (15.765004589s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 02:58:03.876620 1698709 status.go:422] Error apiserver status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
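The 500 above is kube-apiserver's aggregated health response; the single [-]etcd line is the check that is failing. As a rough manual follow-up (a sketch assuming the standard kube-apiserver healthz paths and that the request is authorized; the anonymous probes later in this log get a 403), one could query the verbose endpoint and the failing check directly:

	# overall health with per-check breakdown
	curl -k https://192.168.49.2:8444/healthz?verbose
	# just the failing check
	curl -k https://192.168.49.2:8444/healthz/etcd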
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-different-port-20210817024852-1554185 logs -n 25
E0817 02:58:14.884368 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:58:31.847965 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 -p default-k8s-different-port-20210817024852-1554185 logs -n 25: exit status 110 (1m0.971999984s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                      Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | force-systemd-env-20210817024449-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:49 UTC | Tue, 17 Aug 2021 02:46:27 UTC |
	|         | force-systemd-env-20210817024449-1554185          |                                                   |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | -v=5 --driver=docker                              |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| -p      | force-systemd-env-20210817024449-1554185          | force-systemd-env-20210817024449-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:27 UTC | Tue, 17 Aug 2021 02:46:28 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                                   |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-env-20210817024449-1554185          | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:28 UTC | Tue, 17 Aug 2021 02:46:30 UTC |
	|         | force-systemd-env-20210817024449-1554185          |                                                   |         |         |                               |                               |
	| delete  | -p                                                | kubenet-20210817024630-1554185                    | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:30 UTC | Tue, 17 Aug 2021 02:46:30 UTC |
	|         | kubenet-20210817024630-1554185                    |                                                   |         |         |                               |                               |
	| delete  | -p                                                | flannel-20210817024630-1554185                    | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:30 UTC | Tue, 17 Aug 2021 02:46:31 UTC |
	|         | flannel-20210817024630-1554185                    |                                                   |         |         |                               |                               |
	| delete  | -p                                                | false-20210817024631-1554185                      | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:31 UTC | Tue, 17 Aug 2021 02:46:31 UTC |
	|         | false-20210817024631-1554185                      |                                                   |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:44:41 UTC | Tue, 17 Aug 2021 02:47:20 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2200                                     |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| start   | -p                                                | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:46:31 UTC | Tue, 17 Aug 2021 02:47:24 UTC |
	|         | force-systemd-flag-20210817024631-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2048 --force-systemd                     |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| -p      | force-systemd-flag-20210817024631-1554185         | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:25 UTC | Tue, 17 Aug 2021 02:47:25 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                                   |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:25 UTC | Tue, 17 Aug 2021 02:47:28 UTC |
	|         | force-systemd-flag-20210817024631-1554185         |                                                   |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:20 UTC | Tue, 17 Aug 2021 02:48:02 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2200                                     |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:02 UTC | Tue, 17 Aug 2021 02:48:05 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:28 UTC | Tue, 17 Aug 2021 02:48:49 UTC |
	|         | cert-options-20210817024728-1554185               |                                                   |         |         |                               |                               |
	|         | --memory=2048                                     |                                                   |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                                   |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                                   |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                                   |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| -p      | cert-options-20210817024728-1554185               | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:49 UTC | Tue, 17 Aug 2021 02:48:49 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                                   |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                                   |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:49 UTC | Tue, 17 Aug 2021 02:48:52 UTC |
	|         | cert-options-20210817024728-1554185               |                                                   |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:05 UTC | Tue, 17 Aug 2021 02:50:20 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                   |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                   |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                   |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:29 UTC | Tue, 17 Aug 2021 02:50:29 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:30 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:50 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:52 UTC | Tue, 17 Aug 2021 02:50:54 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:02 UTC | Tue, 17 Aug 2021 02:51:03 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:03 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:57:04 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:57:15 UTC | Tue, 17 Aug 2021 02:57:15 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 02:51:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 02:51:24.251043 1685977 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:51:24.251152 1685977 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:51:24.251169 1685977 out.go:311] Setting ErrFile to fd 2...
	I0817 02:51:24.251197 1685977 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:51:24.251379 1685977 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:51:24.251651 1685977 out.go:305] Setting JSON to false
	I0817 02:51:24.252664 1685977 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38023,"bootTime":1629130662,"procs":404,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:51:24.252748 1685977 start.go:121] virtualization:  
	I0817 02:51:24.256768 1685977 out.go:177] * [default-k8s-different-port-20210817024852-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:51:24.256906 1685977 notify.go:169] Checking for updates...
	I0817 02:51:24.259618 1685977 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:51:24.261360 1685977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:51:24.263335 1685977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:51:24.264950 1685977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:51:24.272522 1685977 config.go:177] Loaded profile config "default-k8s-different-port-20210817024852-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:51:24.273311 1685977 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:51:24.334374 1685977 docker.go:132] docker version: linux-20.10.8
	I0817 02:51:24.334455 1685977 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:51:24.474543 1685977 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:51:24.397405173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:51:24.474647 1685977 docker.go:244] overlay module found
	I0817 02:51:24.476523 1685977 out.go:177] * Using the docker driver based on existing profile
	I0817 02:51:24.476541 1685977 start.go:278] selected driver: docker
	I0817 02:51:24.476547 1685977 start.go:751] validating driver "docker" against &{Name:default-k8s-different-port-20210817024852-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-2021081702485
2-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true sys
tem_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:51:24.476648 1685977 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 02:51:24.476680 1685977 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 02:51:24.476692 1685977 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 02:51:24.478265 1685977 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 02:51:24.478570 1685977 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:51:24.588046 1685977 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:51:24.514296213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 02:51:24.588196 1685977 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 02:51:24.588217 1685977 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 02:51:24.590004 1685977 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 02:51:24.590103 1685977 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 02:51:24.590126 1685977 cni.go:93] Creating CNI manager for ""
	I0817 02:51:24.590133 1685977 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:51:24.590143 1685977 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210817024852-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210817024852-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenA
ddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:51:24.593113 1685977 out.go:177] * Starting control plane node default-k8s-different-port-20210817024852-1554185 in cluster default-k8s-different-port-20210817024852-1554185
	I0817 02:51:24.593147 1685977 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 02:51:24.594951 1685977 out.go:177] * Pulling base image ...
	I0817 02:51:24.594981 1685977 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:51:24.595015 1685977 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 02:51:24.595024 1685977 cache.go:56] Caching tarball of preloaded images
	I0817 02:51:24.595028 1685977 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 02:51:24.595158 1685977 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 02:51:24.595167 1685977 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 02:51:24.595281 1685977 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/config.json ...
	I0817 02:51:24.658896 1685977 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 02:51:24.658919 1685977 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 02:51:24.658935 1685977 cache.go:205] Successfully downloaded all kic artifacts
	I0817 02:51:24.658973 1685977 start.go:313] acquiring machines lock for default-k8s-different-port-20210817024852-1554185: {Name:mka04c3640b00539ca31a06f35f3a83f2a32db60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 02:51:24.659063 1685977 start.go:317] acquired machines lock for "default-k8s-different-port-20210817024852-1554185" in 68.496µs
	I0817 02:51:24.659084 1685977 start.go:93] Skipping create...Using existing machine configuration
	I0817 02:51:24.659090 1685977 fix.go:55] fixHost starting: 
	I0817 02:51:24.659382 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:51:24.710397 1685977 fix.go:108] recreateIfNeeded on default-k8s-different-port-20210817024852-1554185: state=Stopped err=<nil>
	W0817 02:51:24.710419 1685977 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 02:51:21.173334 1683677 api_server.go:255] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 02:51:21.672990 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:25.474293 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 02:51:25.474312 1683677 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 02:51:25.672640 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:25.773285 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 02:51:25.773350 1683677 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 02:51:26.172489 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:26.181455 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0817 02:51:26.181515 1683677 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0817 02:51:26.672039 1683677 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 02:51:26.684988 1683677 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0817 02:51:26.699432 1683677 api_server.go:139] control plane version: v1.14.0
	I0817 02:51:26.699470 1683677 api_server.go:129] duration metric: took 11.027864328s to wait for apiserver health ...
	I0817 02:51:26.699506 1683677 cni.go:93] Creating CNI manager for ""
	I0817 02:51:26.699525 1683677 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:51:24.713519 1685977 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20210817024852-1554185" ...
	I0817 02:51:24.713573 1685977 cli_runner.go:115] Run: docker start default-k8s-different-port-20210817024852-1554185
	I0817 02:51:25.240385 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:51:25.288731 1685977 kic.go:420] container "default-k8s-different-port-20210817024852-1554185" state is running.
	I0817 02:51:25.289954 1685977 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:25.342199 1685977 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/config.json ...
	I0817 02:51:25.342375 1685977 machine.go:88] provisioning docker machine ...
	I0817 02:51:25.342401 1685977 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20210817024852-1554185"
	I0817 02:51:25.342456 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:25.405467 1685977 main.go:130] libmachine: Using SSH client type: native
	I0817 02:51:25.405642 1685977 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50473 <nil> <nil>}
	I0817 02:51:25.405663 1685977 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210817024852-1554185 && echo "default-k8s-different-port-20210817024852-1554185" | sudo tee /etc/hostname
	I0817 02:51:25.406301 1685977 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48612->127.0.0.1:50473: read: connection reset by peer
	I0817 02:51:28.539738 1685977 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210817024852-1554185
	
	I0817 02:51:28.539847 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:28.584321 1685977 main.go:130] libmachine: Using SSH client type: native
	I0817 02:51:28.584494 1685977 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50473 <nil> <nil>}
	I0817 02:51:28.584525 1685977 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210817024852-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210817024852-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210817024852-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 02:51:28.701964 1685977 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 02:51:28.701987 1685977 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 02:51:28.702012 1685977 ubuntu.go:177] setting up certificates
	I0817 02:51:28.702027 1685977 provision.go:83] configureAuth start
	I0817 02:51:28.702083 1685977 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:28.735488 1685977 provision.go:138] copyHostCerts
	I0817 02:51:28.735544 1685977 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 02:51:28.735556 1685977 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 02:51:28.735612 1685977 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 02:51:28.735688 1685977 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 02:51:28.735699 1685977 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 02:51:28.735721 1685977 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 02:51:28.735769 1685977 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 02:51:28.735780 1685977 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 02:51:28.735804 1685977 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 02:51:28.735842 1685977 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210817024852-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20210817024852-1554185]
	I0817 02:51:26.701474 1683677 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:51:26.701537 1683677 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:51:26.704950 1683677 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.14.0/kubectl ...
	I0817 02:51:26.704967 1683677 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:51:26.716527 1683677 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:51:26.967205 1683677 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:51:26.979532 1683677 system_pods.go:59] 8 kube-system pods found
	I0817 02:51:26.979567 1683677 system_pods.go:61] "coredns-fb8b8dccf-jp8m9" [b23bfd69-ff05-11eb-b750-02420e977974] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 02:51:26.979574 1683677 system_pods.go:61] "etcd-old-k8s-version-20210817024805-1554185" [d5b20d31-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979604 1683677 system_pods.go:61] "kindnet-n5vgl" [b2493cdc-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979609 1683677 system_pods.go:61] "kube-apiserver-old-k8s-version-20210817024805-1554185" [d8145a64-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979621 1683677 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210817024805-1554185" [cfbc7b8c-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979625 1683677 system_pods.go:61] "kube-proxy-nhh5q" [b248f49f-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979630 1683677 system_pods.go:61] "kube-scheduler-old-k8s-version-20210817024805-1554185" [d2b8571e-ff05-11eb-b750-02420e977974] Running
	I0817 02:51:26.979640 1683677 system_pods.go:61] "storage-provisioner" [b32ef806-ff05-11eb-b750-02420e977974] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:51:26.979647 1683677 system_pods.go:74] duration metric: took 12.42679ms to wait for pod list to return data ...
	I0817 02:51:26.979670 1683677 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:51:26.982872 1683677 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:51:26.982920 1683677 node_conditions.go:123] node cpu capacity is 2
	I0817 02:51:26.982944 1683677 node_conditions.go:105] duration metric: took 3.261441ms to run NodePressure ...
	I0817 02:51:26.982957 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:27.127614 1683677 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 02:51:27.130997 1683677 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0817 02:51:27.503592 1683677 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0817 02:51:27.944862 1683677 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0817 02:51:28.477064 1683677 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0817 02:51:29.262325 1683677 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0817 02:51:29.288766 1685977 provision.go:172] copyRemoteCerts
	I0817 02:51:29.288834 1685977 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 02:51:29.288876 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.320404 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:51:29.408982 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 02:51:29.424314 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1314 bytes)
	I0817 02:51:29.440413 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 02:51:29.455873 1685977 provision.go:86] duration metric: configureAuth took 753.829649ms
	I0817 02:51:29.455889 1685977 ubuntu.go:193] setting minikube options for container-runtime
	I0817 02:51:29.456060 1685977 config.go:177] Loaded profile config "default-k8s-different-port-20210817024852-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:51:29.456076 1685977 machine.go:91] provisioned docker machine in 4.113685256s
	I0817 02:51:29.456082 1685977 start.go:267] post-start starting for "default-k8s-different-port-20210817024852-1554185" (driver="docker")
	I0817 02:51:29.456089 1685977 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 02:51:29.456133 1685977 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 02:51:29.456173 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.503542 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:51:29.589531 1685977 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 02:51:29.591999 1685977 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 02:51:29.592021 1685977 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 02:51:29.592034 1685977 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 02:51:29.592042 1685977 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 02:51:29.592056 1685977 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 02:51:29.592103 1685977 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 02:51:29.592183 1685977 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 02:51:29.592274 1685977 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 02:51:29.598386 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:51:29.613754 1685977 start.go:270] post-start completed in 157.661872ms
	I0817 02:51:29.613804 1685977 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:51:29.613842 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.647542 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:51:29.730400 1685977 fix.go:57] fixHost completed within 5.071305921s
	I0817 02:51:29.730418 1685977 start.go:80] releasing machines lock for "default-k8s-different-port-20210817024852-1554185", held for 5.071346118s
	I0817 02:51:29.730493 1685977 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.762515 1685977 ssh_runner.go:149] Run: systemctl --version
	I0817 02:51:29.762564 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.762576 1685977 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 02:51:29.762626 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:51:29.803101 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:51:29.808589 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:51:30.024912 1685977 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 02:51:30.036264 1685977 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 02:51:30.044957 1685977 docker.go:153] disabling docker service ...
	I0817 02:51:30.045003 1685977 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 02:51:30.054220 1685977 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 02:51:30.063671 1685977 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 02:51:30.138573 1685977 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 02:51:30.222832 1685977 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 02:51:30.231509 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 02:51:30.242655 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0817 02:51:30.254611 1685977 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 02:51:30.261968 1685977 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 02:51:30.267422 1685977 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 02:51:30.345903 1685977 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 02:51:30.420998 1685977 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 02:51:30.421089 1685977 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 02:51:30.424443 1685977 start.go:413] Will wait 60s for crictl version
	I0817 02:51:30.424503 1685977 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:51:30.448673 1685977 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T02:51:30Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 02:51:30.768179 1683677 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0817 02:51:31.845796 1683677 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0817 02:51:33.718958 1683677 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0817 02:51:36.272473 1683677 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0817 02:51:41.495445 1685977 ssh_runner.go:149] Run: sudo crictl version
	I0817 02:51:41.518517 1685977 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 02:51:41.518571 1685977 ssh_runner.go:149] Run: containerd --version
	I0817 02:51:41.542471 1685977 ssh_runner.go:149] Run: containerd --version
	I0817 02:51:41.564932 1685977 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 02:51:41.565000 1685977 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210817024852-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 02:51:41.596579 1685977 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 02:51:41.599650 1685977 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 02:51:41.607901 1685977 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 02:51:41.607960 1685977 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:51:41.633436 1685977 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:51:41.633455 1685977 containerd.go:517] Images already preloaded, skipping extraction
	I0817 02:51:41.633495 1685977 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 02:51:41.656240 1685977 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 02:51:41.656259 1685977 cache_images.go:74] Images are preloaded, skipping loading
	I0817 02:51:41.656298 1685977 ssh_runner.go:149] Run: sudo crictl info
	I0817 02:51:41.677901 1685977 cni.go:93] Creating CNI manager for ""
	I0817 02:51:41.677922 1685977 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:51:41.677934 1685977 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 02:51:41.677948 1685977 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210817024852-1554185 NodeName:default-k8s-different-port-20210817024852-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.16
8.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 02:51:41.678072 1685977 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210817024852-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 02:51:41.678162 1685977 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210817024852-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210817024852-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0817 02:51:41.678216 1685977 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 02:51:41.686039 1685977 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 02:51:41.686087 1685977 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 02:51:41.696253 1685977 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (594 bytes)
	I0817 02:51:41.707800 1685977 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 02:51:41.719235 1685977 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0817 02:51:41.730312 1685977 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 02:51:41.733438 1685977 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 02:51:41.741372 1685977 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185 for IP: 192.168.49.2
	I0817 02:51:41.741439 1685977 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 02:51:41.741463 1685977 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 02:51:41.741513 1685977 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.key
	I0817 02:51:41.741533 1685977 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/apiserver.key.dd3b5fb2
	I0817 02:51:41.741551 1685977 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/proxy-client.key
	I0817 02:51:41.741665 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 02:51:41.741714 1685977 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 02:51:41.741729 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 02:51:41.741757 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 02:51:41.741788 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 02:51:41.741815 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 02:51:41.741874 1685977 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 02:51:41.743037 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 02:51:41.757778 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 02:51:41.772662 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 02:51:41.788238 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 02:51:41.803479 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 02:51:41.818405 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 02:51:41.834290 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 02:51:41.850257 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 02:51:41.865564 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 02:51:41.880436 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 02:51:41.896728 1685977 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 02:51:41.911439 1685977 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 02:51:41.922240 1685977 ssh_runner.go:149] Run: openssl version
	I0817 02:51:41.926694 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 02:51:41.933350 1685977 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 02:51:41.937922 1685977 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 02:51:41.937983 1685977 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 02:51:41.943999 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 02:51:41.951862 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 02:51:41.958563 1685977 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 02:51:41.961196 1685977 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 02:51:41.961239 1685977 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 02:51:41.965518 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 02:51:41.971667 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 02:51:41.977893 1685977 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:51:41.980824 1685977 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:51:41.980862 1685977 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 02:51:41.986962 1685977 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 02:51:41.992849 1685977 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210817024852-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210817024852-1554185 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] Start
HostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:51:41.992963 1685977 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 02:51:41.993009 1685977 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:51:42.016929 1685977 cri.go:76] found id: "f19ee84b7c94fa694cc46aa4f13704d95553f850e7887991adc2814948d63f41"
	I0817 02:51:42.016949 1685977 cri.go:76] found id: "49df9fab49f87f7e2113469f38d8dabd5ae0608b25a7beb6a20fd67c7c539d05"
	I0817 02:51:42.016955 1685977 cri.go:76] found id: "78c4325f16b23a8e0c2de1c6bab0242fc4abb3c1f4f067ad4d53cc19f9d8c6d3"
	I0817 02:51:42.016959 1685977 cri.go:76] found id: "8ff2e923d4fececca9e36feac69692fefdc6c915178679880a20c2a1d0956edf"
	I0817 02:51:42.016963 1685977 cri.go:76] found id: "708b84fed7b61b21afa376ef8807e544b39450abc93c611d6f112ac4ff06f48e"
	I0817 02:51:42.016969 1685977 cri.go:76] found id: "7278496269401b57811d1a6760d5898522e77d3d73e46421d9bc1e3dd87be48d"
	I0817 02:51:42.016973 1685977 cri.go:76] found id: "41834b1c0478bcecbd69b2ef8b1d5d654426af1909f04dbc4b219acef4a2ecd0"
	I0817 02:51:42.016982 1685977 cri.go:76] found id: "0daab25e4445c73484cec64d8d72127d3a5420b8b54c70d8a155b6fc50297375"
	I0817 02:51:42.016987 1685977 cri.go:76] found id: ""
	I0817 02:51:42.017034 1685977 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 02:51:42.029655 1685977 cri.go:103] JSON = null
	W0817 02:51:42.029691 1685977 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0817 02:51:42.029754 1685977 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 02:51:42.037228 1685977 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 02:51:42.037248 1685977 kubeadm.go:600] restartCluster start
	I0817 02:51:42.037283 1685977 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 02:51:42.042889 1685977 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:42.043683 1685977 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210817024852-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:51:42.043908 1685977 kubeconfig.go:128] "default-k8s-different-port-20210817024852-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 02:51:42.045048 1685977 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:51:42.048853 1685977 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 02:51:42.055696 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:42.055736 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:42.064895 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:42.265222 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:42.265292 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:42.274662 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:42.465981 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:42.466063 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:42.475766 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:42.665067 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:42.665149 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:42.675438 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:42.865606 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:42.865709 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:42.876424 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:43.065771 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:43.065878 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:43.081119 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:43.265472 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:43.265563 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:43.277132 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:43.465453 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:43.465540 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:43.475586 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:43.665924 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:43.666000 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:43.675088 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:43.865406 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:43.865477 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:43.875308 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:44.065630 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:44.065717 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:44.075143 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:41.408402 1683677 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0817 02:51:44.265470 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:44.265569 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:44.274785 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:44.465030 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:44.465084 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:44.474244 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:44.665583 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:44.665667 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:44.674961 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:44.865219 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:44.865274 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:44.874844 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:45.065022 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:45.065109 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:45.075997 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:45.076015 1685977 api_server.go:164] Checking apiserver status ...
	I0817 02:51:45.076059 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 02:51:45.085677 1685977 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:45.085695 1685977 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0817 02:51:45.085702 1685977 kubeadm.go:1032] stopping kube-system containers ...
	I0817 02:51:45.085712 1685977 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:51:45.085754 1685977 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:51:45.110329 1685977 cri.go:76] found id: "f19ee84b7c94fa694cc46aa4f13704d95553f850e7887991adc2814948d63f41"
	I0817 02:51:45.110349 1685977 cri.go:76] found id: "49df9fab49f87f7e2113469f38d8dabd5ae0608b25a7beb6a20fd67c7c539d05"
	I0817 02:51:45.110354 1685977 cri.go:76] found id: "78c4325f16b23a8e0c2de1c6bab0242fc4abb3c1f4f067ad4d53cc19f9d8c6d3"
	I0817 02:51:45.110359 1685977 cri.go:76] found id: "8ff2e923d4fececca9e36feac69692fefdc6c915178679880a20c2a1d0956edf"
	I0817 02:51:45.110364 1685977 cri.go:76] found id: "708b84fed7b61b21afa376ef8807e544b39450abc93c611d6f112ac4ff06f48e"
	I0817 02:51:45.110375 1685977 cri.go:76] found id: "7278496269401b57811d1a6760d5898522e77d3d73e46421d9bc1e3dd87be48d"
	I0817 02:51:45.110384 1685977 cri.go:76] found id: "41834b1c0478bcecbd69b2ef8b1d5d654426af1909f04dbc4b219acef4a2ecd0"
	I0817 02:51:45.110388 1685977 cri.go:76] found id: "0daab25e4445c73484cec64d8d72127d3a5420b8b54c70d8a155b6fc50297375"
	I0817 02:51:45.110393 1685977 cri.go:76] found id: ""
	I0817 02:51:45.110399 1685977 cri.go:221] Stopping containers: [f19ee84b7c94fa694cc46aa4f13704d95553f850e7887991adc2814948d63f41 49df9fab49f87f7e2113469f38d8dabd5ae0608b25a7beb6a20fd67c7c539d05 78c4325f16b23a8e0c2de1c6bab0242fc4abb3c1f4f067ad4d53cc19f9d8c6d3 8ff2e923d4fececca9e36feac69692fefdc6c915178679880a20c2a1d0956edf 708b84fed7b61b21afa376ef8807e544b39450abc93c611d6f112ac4ff06f48e 7278496269401b57811d1a6760d5898522e77d3d73e46421d9bc1e3dd87be48d 41834b1c0478bcecbd69b2ef8b1d5d654426af1909f04dbc4b219acef4a2ecd0 0daab25e4445c73484cec64d8d72127d3a5420b8b54c70d8a155b6fc50297375]
	I0817 02:51:45.110444 1685977 ssh_runner.go:149] Run: which crictl
	I0817 02:51:45.113091 1685977 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop f19ee84b7c94fa694cc46aa4f13704d95553f850e7887991adc2814948d63f41 49df9fab49f87f7e2113469f38d8dabd5ae0608b25a7beb6a20fd67c7c539d05 78c4325f16b23a8e0c2de1c6bab0242fc4abb3c1f4f067ad4d53cc19f9d8c6d3 8ff2e923d4fececca9e36feac69692fefdc6c915178679880a20c2a1d0956edf 708b84fed7b61b21afa376ef8807e544b39450abc93c611d6f112ac4ff06f48e 7278496269401b57811d1a6760d5898522e77d3d73e46421d9bc1e3dd87be48d 41834b1c0478bcecbd69b2ef8b1d5d654426af1909f04dbc4b219acef4a2ecd0 0daab25e4445c73484cec64d8d72127d3a5420b8b54c70d8a155b6fc50297375
	I0817 02:51:45.136492 1685977 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 02:51:45.145466 1685977 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:51:45.151501 1685977 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 02:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 17 02:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2135 Aug 17 02:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 17 02:49 /etc/kubernetes/scheduler.conf
	
	I0817 02:51:45.151546 1685977 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0817 02:51:45.157465 1685977 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0817 02:51:45.163552 1685977 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0817 02:51:45.169169 1685977 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:45.169215 1685977 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 02:51:45.174787 1685977 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0817 02:51:45.180499 1685977 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 02:51:45.180542 1685977 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 02:51:45.186204 1685977 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 02:51:45.192206 1685977 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 02:51:45.192224 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:45.534866 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:47.119038 1685977 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.584142233s)
	I0817 02:51:47.119067 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:47.278495 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:47.375691 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:51:47.451832 1685977 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:51:47.451890 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:47.961867 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:48.461945 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:48.961915 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:49.461719 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:49.962225 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:50.461930 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:50.962264 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:51.462116 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:51.961392 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:52.461331 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:52.962280 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:53.461496 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:53.961570 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:51.169969 1683677 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0817 02:51:54.462008 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:54.961612 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:55.461888 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:55.961367 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:56.461350 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:56.961953 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:51:56.983198 1685977 api_server.go:70] duration metric: took 9.531366948s to wait for apiserver process to appear ...
	I0817 02:51:56.983216 1685977 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:51:56.983225 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:52:01.985180 1685977 api_server.go:255] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 02:52:02.485821 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:52:03.291602 1685977 api_server.go:265] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 02:52:03.291631 1685977 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 02:52:03.485767 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:52:03.500401 1685977 api_server.go:265] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 02:52:03.500428 1685977 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 02:52:03.985704 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:52:03.995630 1685977 api_server.go:265] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 02:52:03.995684 1685977 api_server.go:101] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 02:52:04.485313 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:52:04.493600 1685977 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0817 02:52:04.506676 1685977 api_server.go:139] control plane version: v1.21.3
	I0817 02:52:04.506692 1685977 api_server.go:129] duration metric: took 7.523470676s to wait for apiserver health ...
	I0817 02:52:04.506701 1685977 cni.go:93] Creating CNI manager for ""
	I0817 02:52:04.506709 1685977 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:52:04.508614 1685977 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:52:04.508667 1685977 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:52:04.511864 1685977 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 02:52:04.511881 1685977 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:52:04.524401 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:52:04.815348 1685977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:52:04.829004 1685977 system_pods.go:59] 9 kube-system pods found
	I0817 02:52:04.829034 1685977 system_pods.go:61] "coredns-558bd4d5db-5nznw" [518c8755-cec2-4876-aa9a-9bd786980d36] Running
	I0817 02:52:04.829040 1685977 system_pods.go:61] "etcd-default-k8s-different-port-20210817024852-1554185" [6a3bb1e7-c1cf-4441-82ac-05ffd6545931] Running
	I0817 02:52:04.829048 1685977 system_pods.go:61] "kindnet-kg587" [58fc2d27-5f85-413f-934b-9000fec9f47a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 02:52:04.829054 1685977 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210817024852-1554185" [b67aed7d-0edc-4cf6-ad11-b449ae991ea1] Running
	I0817 02:52:04.829064 1685977 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" [c71fc635-9e84-43de-b5c2-38167b8cb62a] Running
	I0817 02:52:04.829069 1685977 system_pods.go:61] "kube-proxy-ldzzq" [307f545c-fc74-4292-abb7-77375fc4d06d] Running
	I0817 02:52:04.829078 1685977 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210817024852-1554185" [15448286-e6ef-46f8-9cd8-3c3681b4b794] Running
	I0817 02:52:04.829084 1685977 system_pods.go:61] "metrics-server-7c784ccb57-cr7nj" [b25c2ae0-c8e5-4b37-972d-2caac1a37d0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 02:52:04.829096 1685977 system_pods.go:61] "storage-provisioner" [6fbd6fee-aa3d-47cb-9b75-2b7d3a25ba1e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 02:52:04.829102 1685977 system_pods.go:74] duration metric: took 13.739007ms to wait for pod list to return data ...
	I0817 02:52:04.829113 1685977 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:52:04.836158 1685977 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:52:04.836182 1685977 node_conditions.go:123] node cpu capacity is 2
	I0817 02:52:04.836193 1685977 node_conditions.go:105] duration metric: took 7.076434ms to run NodePressure ...
	I0817 02:52:04.836206 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 02:52:05.092298 1685977 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 02:52:05.098183 1685977 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0817 02:52:05.463335 1685977 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0817 02:52:05.904251 1685977 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0817 02:52:06.435734 1685977 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0817 02:52:07.220388 1685977 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0817 02:52:08.726608 1685977 kubeadm.go:746] kubelet initialised
	I0817 02:52:08.726629 1685977 kubeadm.go:747] duration metric: took 3.634311049s waiting for restarted kubelet to initialise ...
	I0817 02:52:08.726635 1685977 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:52:08.731053 1685977 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-5nznw" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:10.112475 1683677 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0817 02:52:10.746370 1685977 pod_ready.go:102] pod "coredns-558bd4d5db-5nznw" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:11.750027 1685977 pod_ready.go:92] pod "coredns-558bd4d5db-5nznw" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:11.750059 1685977 pod_ready.go:81] duration metric: took 3.018980591s waiting for pod "coredns-558bd4d5db-5nznw" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:11.750069 1685977 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.348375 1685977 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:13.348400 1685977 pod_ready.go:81] duration metric: took 1.598321976s waiting for pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.348413 1685977 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.352181 1685977 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:13.352201 1685977 pod_ready.go:81] duration metric: took 3.780558ms waiting for pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.352211 1685977 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.355850 1685977 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:13.355867 1685977 pod_ready.go:81] duration metric: took 3.648694ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.355876 1685977 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ldzzq" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.359651 1685977 pod_ready.go:92] pod "kube-proxy-ldzzq" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:13.359669 1685977 pod_ready.go:81] duration metric: took 3.785818ms waiting for pod "kube-proxy-ldzzq" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:13.359679 1685977 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:14.368765 1685977 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:14.368790 1685977 pod_ready.go:81] duration metric: took 1.009102089s waiting for pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:14.368800 1685977 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:16.548731 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:18.548909 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:21.049710 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:23.548889 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:25.549067 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:28.049511 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:25.563354 1683677 kubeadm.go:746] kubelet initialised
	I0817 02:52:25.563374 1683677 kubeadm.go:747] duration metric: took 58.435709937s waiting for restarted kubelet to initialise ...
	I0817 02:52:25.563381 1683677 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:52:25.568074 1683677 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-jp8m9" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.577170 1683677 pod_ready.go:92] pod "coredns-fb8b8dccf-jp8m9" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.577194 1683677 pod_ready.go:81] duration metric: took 9.090528ms waiting for pod "coredns-fb8b8dccf-jp8m9" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.577204 1683677 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-mcnc6" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.580884 1683677 pod_ready.go:92] pod "coredns-fb8b8dccf-mcnc6" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.580901 1683677 pod_ready.go:81] duration metric: took 3.691246ms waiting for pod "coredns-fb8b8dccf-mcnc6" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.580908 1683677 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.584951 1683677 pod_ready.go:92] pod "etcd-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.584969 1683677 pod_ready.go:81] duration metric: took 4.05418ms waiting for pod "etcd-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.584979 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.588894 1683677 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.588913 1683677 pod_ready.go:81] duration metric: took 3.925311ms waiting for pod "kube-apiserver-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.588922 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.961580 1683677 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:25.961598 1683677 pod_ready.go:81] duration metric: took 372.668062ms waiting for pod "kube-controller-manager-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:25.961609 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nhh5q" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.362231 1683677 pod_ready.go:92] pod "kube-proxy-nhh5q" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:26.362251 1683677 pod_ready.go:81] duration metric: took 400.63554ms waiting for pod "kube-proxy-nhh5q" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.362261 1683677 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.761518 1683677 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210817024805-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:52:26.761538 1683677 pod_ready.go:81] duration metric: took 399.268628ms waiting for pod "kube-scheduler-old-k8s-version-20210817024805-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:26.761549 1683677 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace to be "Ready" ...
	I0817 02:52:29.166693 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:30.549224 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:33.049782 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:31.166878 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:33.166960 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:35.549390 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:38.050107 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:35.667458 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:38.167056 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:40.167129 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:40.548895 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:43.050133 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:42.665923 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:44.674856 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:45.548666 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:47.548790 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:47.166577 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:49.167004 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:50.049426 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:52.548979 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:51.666930 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:54.166024 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:55.048240 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:57.048958 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:59.049140 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:56.167473 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:52:58.667062 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:01.049548 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:03.054245 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:01.166392 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:03.166660 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:05.549727 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:08.049176 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:05.667248 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:08.167175 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:10.167277 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:10.050022 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:12.549231 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:12.666646 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:14.667107 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:15.049207 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:17.049576 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:19.054040 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:17.166790 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:19.666869 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:21.548395 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:24.049554 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:21.666936 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:23.667266 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:26.549088 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:29.049801 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:25.667896 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:28.166786 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:30.166991 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:31.050272 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:33.548283 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:32.666672 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:34.674735 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:35.548389 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:37.549130 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:37.166927 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:39.667330 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:40.048932 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:42.050111 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:42.167056 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:44.666646 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:44.548604 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:46.549481 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:49.048921 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:46.667864 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:49.166934 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:51.548413 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:54.050582 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:51.667345 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:54.166496 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:56.549207 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:59.049117 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:56.167389 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:53:58.666838 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:01.548661 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:03.548776 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:00.666895 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:03.166942 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:05.549284 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:08.049674 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:05.666841 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:08.166343 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:10.167428 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:10.548566 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:12.548793 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:12.666920 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:15.167557 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:15.048904 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:17.548621 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:17.667314 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:20.167520 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:19.548686 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:21.549417 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:24.049484 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:22.667134 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:25.167010 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:26.548211 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:28.548976 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:27.167117 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:29.666570 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:30.549059 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:33.049494 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:32.166673 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:34.167205 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:35.070574 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:37.548051 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:36.167278 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:38.666736 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:39.548395 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:42.049591 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:41.166234 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:43.167351 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:44.547941 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:46.548909 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:49.049493 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:45.666902 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:48.166473 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:50.167180 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:51.548264 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:53.548677 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:52.666625 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:55.166893 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:56.048889 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:58.051782 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:57.167206 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:54:59.667203 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:00.548600 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:03.049154 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:02.166246 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:04.166904 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:05.548239 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:07.549020 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:06.167605 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:08.666362 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:10.049433 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:12.052731 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:10.666627 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:13.166461 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:14.549237 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:16.549496 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:19.049161 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:15.666987 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:17.667555 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:20.167216 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:21.548251 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:24.048643 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:22.666194 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:24.670235 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:26.048790 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:28.548918 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:27.166353 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:29.166945 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:31.049262 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:33.548201 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:31.666651 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:34.166901 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:35.552149 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:38.048626 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:36.666743 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:39.166482 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:40.050022 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:42.052127 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:41.167158 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:43.176080 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:44.550109 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:47.050193 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:45.666970 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:48.167040 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:49.548516 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:52.048827 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:50.667197 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:53.167284 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:54.549177 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:57.048920 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:55.666563 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:58.165868 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:00.166547 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:55:59.548365 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:02.048722 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:04.049389 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:02.169578 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:04.666593 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:06.049778 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:08.548472 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:06.666772 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:08.668718 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:10.548795 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:13.048705 1685977 pod_ready.go:102] pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:11.167180 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:13.666474 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:14.545138 1685977 pod_ready.go:81] duration metric: took 4m0.176323682s waiting for pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace to be "Ready" ...
	E0817 02:56:14.545160 1685977 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-cr7nj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0817 02:56:14.545176 1685977 pod_ready.go:38] duration metric: took 4m5.818529856s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:56:14.545204 1685977 kubeadm.go:604] restartCluster took 4m32.507951116s
	W0817 02:56:14.545314 1685977 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 02:56:14.545348 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0817 02:56:16.599387 1685977 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.054018506s)
	I0817 02:56:16.599456 1685977 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0817 02:56:16.608972 1685977 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:56:16.609032 1685977 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:56:16.631456 1685977 cri.go:76] found id: ""
	I0817 02:56:16.631506 1685977 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 02:56:16.639516 1685977 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 02:56:16.639567 1685977 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:56:16.645552 1685977 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 02:56:16.645586 1685977 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 02:56:16.972170 1685977 out.go:204]   - Generating certificates and keys ...
	I0817 02:56:18.696663 1685977 out.go:204]   - Booting up control plane ...
	I0817 02:56:15.667122 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:17.667179 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:19.667252 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:22.166690 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:24.167040 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:26.666042 1683677 pod_ready.go:102] pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:27.162685 1683677 pod_ready.go:81] duration metric: took 4m0.40112198s waiting for pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace to be "Ready" ...
	E0817 02:56:27.162707 1683677 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-pfsdf" in "kube-system" namespace to be "Ready" (will not retry!)
	I0817 02:56:27.162724 1683677 pod_ready.go:38] duration metric: took 4m1.599333201s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:56:27.162750 1683677 kubeadm.go:604] restartCluster took 5m19.132650156s
	W0817 02:56:27.162885 1683677 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 02:56:27.162914 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0817 02:56:29.771314 1683677 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.608376314s)
	I0817 02:56:29.771371 1683677 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0817 02:56:29.783800 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 02:56:29.783862 1683677 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 02:56:29.828158 1683677 cri.go:76] found id: ""
	I0817 02:56:29.828206 1683677 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 02:56:29.841550 1683677 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 02:56:29.841592 1683677 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 02:56:29.851739 1683677 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 02:56:29.851771 1683677 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 02:56:30.528257 1683677 out.go:204]   - Generating certificates and keys ...
	I0817 02:56:34.213997 1683677 out.go:204]   - Booting up control plane ...
	I0817 02:56:40.265745 1685977 out.go:204]   - Configuring RBAC rules ...
	I0817 02:56:40.719797 1685977 cni.go:93] Creating CNI manager for ""
	I0817 02:56:40.719816 1685977 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 02:56:40.721573 1685977 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 02:56:40.721629 1685977 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 02:56:40.725046 1685977 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 02:56:40.725060 1685977 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 02:56:40.745867 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 02:56:41.043942 1685977 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 02:56:41.044090 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:41.044170 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210817024852-1554185 minikube.k8s.io/updated_at=2021_08_17T02_56_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:41.065176 1685977 ops.go:34] apiserver oom_adj: -16
	I0817 02:56:41.177959 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:41.757346 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:42.256794 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:42.756791 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:43.257487 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:43.757647 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:44.256760 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:44.757032 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:45.257420 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:45.757557 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:46.256819 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:46.757492 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:47.257361 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:47.756853 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:48.257270 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:48.756824 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:49.257196 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:49.756775 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:50.257140 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:50.757021 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:51.257491 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:51.757426 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:52.257111 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:52.756899 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:53.257803 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:53.756814 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:54.257291 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:54.757459 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:55.257423 1685977 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 02:56:55.349810 1685977 kubeadm.go:985] duration metric: took 14.305777046s to wait for elevateKubeSystemPrivileges.
	I0817 02:56:55.349836 1685977 kubeadm.go:392] StartCluster complete in 5m13.356990711s
	I0817 02:56:55.349852 1685977 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:56:55.349932 1685977 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:56:55.350990 1685977 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 02:56:55.873850 1685977 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210817024852-1554185" rescaled to 1
	I0817 02:56:55.873900 1685977 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 02:56:55.875535 1685977 out.go:177] * Verifying Kubernetes components...
	I0817 02:56:55.873948 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 02:56:55.875608 1685977 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:56:55.874174 1685977 config.go:177] Loaded profile config "default-k8s-different-port-20210817024852-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:56:55.874189 1685977 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0817 02:56:55.875761 1685977 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:55.875784 1685977 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:55.875807 1685977 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:55.875852 1685977 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:55.875874 1685977 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210817024852-1554185"
	W0817 02:56:55.875892 1685977 addons.go:147] addon dashboard should already be in state true
	I0817 02:56:55.875924 1685977 host.go:66] Checking if "default-k8s-different-port-20210817024852-1554185" exists ...
	I0817 02:56:55.876181 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:56:55.876436 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:56:55.875789 1685977 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210817024852-1554185"
	W0817 02:56:55.876568 1685977 addons.go:147] addon storage-provisioner should already be in state true
	I0817 02:56:55.876587 1685977 host.go:66] Checking if "default-k8s-different-port-20210817024852-1554185" exists ...
	I0817 02:56:55.876999 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:56:55.875810 1685977 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:55.877052 1685977 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210817024852-1554185"
	W0817 02:56:55.877058 1685977 addons.go:147] addon metrics-server should already be in state true
	I0817 02:56:55.877072 1685977 host.go:66] Checking if "default-k8s-different-port-20210817024852-1554185" exists ...
	I0817 02:56:55.877456 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:56:55.995544 1685977 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210817024852-1554185"
	W0817 02:56:55.995564 1685977 addons.go:147] addon default-storageclass should already be in state true
	I0817 02:56:55.995586 1685977 host.go:66] Checking if "default-k8s-different-port-20210817024852-1554185" exists ...
	I0817 02:56:55.996023 1685977 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210817024852-1554185 --format={{.State.Status}}
	I0817 02:56:56.015954 1685977 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0817 02:56:56.016016 1685977 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 02:56:56.016025 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 02:56:56.016078 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:56:56.019888 1685977 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0817 02:56:56.022681 1685977 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0817 02:56:56.022730 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 02:56:56.022738 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 02:56:56.022784 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:56:56.059397 1685977 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 02:56:56.059492 1685977 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:56:56.059501 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 02:56:56.059554 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:56:56.082725 1685977 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210817024852-1554185" to be "Ready" ...
	I0817 02:56:56.083763 1685977 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 02:56:56.084745 1685977 node_ready.go:49] node "default-k8s-different-port-20210817024852-1554185" has status "Ready":"True"
	I0817 02:56:56.084758 1685977 node_ready.go:38] duration metric: took 2.009433ms waiting for node "default-k8s-different-port-20210817024852-1554185" to be "Ready" ...
	I0817 02:56:56.084769 1685977 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:56:56.096809 1685977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-8rfj4" in "kube-system" namespace to be "Ready" ...
	I0817 02:56:56.149127 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:56:56.191058 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:56:56.191637 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:56:56.192709 1685977 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 02:56:56.192724 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 02:56:56.192771 1685977 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210817024852-1554185
	I0817 02:56:56.248139 1685977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50473 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210817024852-1554185/id_rsa Username:docker}
	I0817 02:56:56.330588 1685977 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 02:56:56.362001 1685977 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 02:56:56.538907 1685977 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 02:56:56.538927 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0817 02:56:56.545738 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 02:56:56.545757 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 02:56:56.644902 1685977 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 02:56:56.644922 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 02:56:56.663120 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 02:56:56.663138 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 02:56:56.809832 1685977 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 02:56:56.809854 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 02:56:56.891456 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 02:56:56.891478 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 02:56:57.116106 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 02:56:57.116129 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0817 02:56:57.141884 1685977 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 02:56:57.165409 1685977 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.08162096s)
	I0817 02:56:57.165436 1685977 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0817 02:56:57.213794 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 02:56:57.213816 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 02:56:57.316053 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 02:56:57.316075 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 02:56:57.434942 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 02:56:57.434964 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 02:56:57.511781 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 02:56:57.511802 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 02:56:57.621303 1685977 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 02:56:57.621326 1685977 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 02:56:57.627698 1685977 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.297055547s)
	I0817 02:56:57.627784 1685977 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.265762885s)
	I0817 02:56:57.653417 1685977 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 02:56:58.257285 1685977 pod_ready.go:102] pod "coredns-558bd4d5db-8rfj4" in "kube-system" namespace has status "Ready":"False"
	I0817 02:56:58.515929 1685977 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.374011036s)
	I0817 02:56:58.515960 1685977 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210817024852-1554185"
	I0817 02:56:59.006667 1685977 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.353175904s)
	I0817 02:56:59.008617 1685977 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0817 02:56:59.008637 1685977 addons.go:344] enableAddons completed in 3.134454092s
	I0817 02:56:59.619976 1685977 pod_ready.go:92] pod "coredns-558bd4d5db-8rfj4" in "kube-system" namespace has status "Ready":"True"
	I0817 02:56:59.620007 1685977 pod_ready.go:81] duration metric: took 3.52314652s waiting for pod "coredns-558bd4d5db-8rfj4" in "kube-system" namespace to be "Ready" ...
	I0817 02:56:59.620032 1685977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-wtlfm" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:01.635272 1685977 pod_ready.go:102] pod "coredns-558bd4d5db-wtlfm" in "kube-system" namespace has status "Ready":"False"
	I0817 02:57:03.626539 1685977 pod_ready.go:97] error getting pod "coredns-558bd4d5db-wtlfm" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-wtlfm" not found
	I0817 02:57:03.626566 1685977 pod_ready.go:81] duration metric: took 4.006519971s waiting for pod "coredns-558bd4d5db-wtlfm" in "kube-system" namespace to be "Ready" ...
	E0817 02:57:03.626577 1685977 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-wtlfm" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-wtlfm" not found
	I0817 02:57:03.626585 1685977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.637703 1685977 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:57:03.637721 1685977 pod_ready.go:81] duration metric: took 11.127045ms waiting for pod "etcd-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.637735 1685977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.642094 1685977 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:57:03.642111 1685977 pod_ready.go:81] duration metric: took 4.368458ms waiting for pod "kube-apiserver-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.642121 1685977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.647863 1685977 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:57:03.647880 1685977 pod_ready.go:81] duration metric: took 5.750638ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.647891 1685977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5mnnj" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.653034 1685977 pod_ready.go:92] pod "kube-proxy-5mnnj" in "kube-system" namespace has status "Ready":"True"
	I0817 02:57:03.653050 1685977 pod_ready.go:81] duration metric: took 5.152114ms waiting for pod "kube-proxy-5mnnj" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.653058 1685977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.827111 1685977 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 02:57:03.827170 1685977 pod_ready.go:81] duration metric: took 174.102098ms waiting for pod "kube-scheduler-default-k8s-different-port-20210817024852-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 02:57:03.827191 1685977 pod_ready.go:38] duration metric: took 7.7424112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 02:57:03.827217 1685977 api_server.go:50] waiting for apiserver process to appear ...
	I0817 02:57:03.827284 1685977 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:57:03.850493 1685977 api_server.go:70] duration metric: took 7.976564897s to wait for apiserver process to appear ...
	I0817 02:57:03.850552 1685977 api_server.go:86] waiting for apiserver healthz status ...
	I0817 02:57:03.850574 1685977 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0817 02:57:03.859078 1685977 api_server.go:265] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0817 02:57:03.859754 1685977 api_server.go:139] control plane version: v1.21.3
	I0817 02:57:03.859770 1685977 api_server.go:129] duration metric: took 9.200861ms to wait for apiserver health ...
	I0817 02:57:03.859776 1685977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 02:57:04.030424 1685977 system_pods.go:59] 9 kube-system pods found
	I0817 02:57:04.030522 1685977 system_pods.go:61] "coredns-558bd4d5db-8rfj4" [c0122f9e-6b2a-4ee2-ae8a-3985bbc5160d] Running
	I0817 02:57:04.030541 1685977 system_pods.go:61] "etcd-default-k8s-different-port-20210817024852-1554185" [15ccf197-37be-47d8-9a87-49c904ab5e74] Running
	I0817 02:57:04.030559 1685977 system_pods.go:61] "kindnet-jvbx9" [c3ef7b0d-aa4d-431f-85c3-eec88c3223bc] Running
	I0817 02:57:04.030576 1685977 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210817024852-1554185" [242d416f-9292-4cbd-b182-8aafe6cec200] Running
	I0817 02:57:04.030610 1685977 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" [aeab199a-072d-4b0b-bf31-d78123bb018f] Running
	I0817 02:57:04.030628 1685977 system_pods.go:61] "kube-proxy-5mnnj" [6b672c7a-ea5e-4ef4-932c-95a01336037e] Running
	I0817 02:57:04.030645 1685977 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210817024852-1554185" [4fdad56d-4f2b-4cc1-8708-def6cb1f3602] Running
	I0817 02:57:04.030664 1685977 system_pods.go:61] "metrics-server-7c784ccb57-67mmz" [d68b6163-f479-44ce-b297-206cc3375f8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 02:57:04.030690 1685977 system_pods.go:61] "storage-provisioner" [b4483bc5-0558-4d83-96e9-b61e6cb235ae] Running
	I0817 02:57:04.030714 1685977 system_pods.go:74] duration metric: took 170.930413ms to wait for pod list to return data ...
	I0817 02:57:04.030731 1685977 default_sa.go:34] waiting for default service account to be created ...
	I0817 02:57:04.227758 1685977 default_sa.go:45] found service account: "default"
	I0817 02:57:04.227782 1685977 default_sa.go:55] duration metric: took 197.046322ms for default service account to be created ...
	I0817 02:57:04.227790 1685977 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 02:57:04.430644 1685977 system_pods.go:86] 9 kube-system pods found
	I0817 02:57:04.430674 1685977 system_pods.go:89] "coredns-558bd4d5db-8rfj4" [c0122f9e-6b2a-4ee2-ae8a-3985bbc5160d] Running
	I0817 02:57:04.430681 1685977 system_pods.go:89] "etcd-default-k8s-different-port-20210817024852-1554185" [15ccf197-37be-47d8-9a87-49c904ab5e74] Running
	I0817 02:57:04.430687 1685977 system_pods.go:89] "kindnet-jvbx9" [c3ef7b0d-aa4d-431f-85c3-eec88c3223bc] Running
	I0817 02:57:04.430693 1685977 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210817024852-1554185" [242d416f-9292-4cbd-b182-8aafe6cec200] Running
	I0817 02:57:04.430698 1685977 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210817024852-1554185" [aeab199a-072d-4b0b-bf31-d78123bb018f] Running
	I0817 02:57:04.430703 1685977 system_pods.go:89] "kube-proxy-5mnnj" [6b672c7a-ea5e-4ef4-932c-95a01336037e] Running
	I0817 02:57:04.430709 1685977 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210817024852-1554185" [4fdad56d-4f2b-4cc1-8708-def6cb1f3602] Running
	I0817 02:57:04.430723 1685977 system_pods.go:89] "metrics-server-7c784ccb57-67mmz" [d68b6163-f479-44ce-b297-206cc3375f8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 02:57:04.430734 1685977 system_pods.go:89] "storage-provisioner" [b4483bc5-0558-4d83-96e9-b61e6cb235ae] Running
	I0817 02:57:04.430741 1685977 system_pods.go:126] duration metric: took 202.94681ms to wait for k8s-apps to be running ...
	I0817 02:57:04.430752 1685977 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 02:57:04.430797 1685977 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:57:04.440008 1685977 system_svc.go:56] duration metric: took 9.251797ms WaitForService to wait for kubelet.
	I0817 02:57:04.440053 1685977 kubeadm.go:547] duration metric: took 8.566127551s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 02:57:04.440080 1685977 node_conditions.go:102] verifying NodePressure condition ...
	I0817 02:57:04.633083 1685977 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 02:57:04.633108 1685977 node_conditions.go:123] node cpu capacity is 2
	I0817 02:57:04.633120 1685977 node_conditions.go:105] duration metric: took 193.033817ms to run NodePressure ...
	I0817 02:57:04.633130 1685977 start.go:231] waiting for startup goroutines ...
	I0817 02:57:04.716921 1685977 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 02:57:04.719057 1685977 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210817024852-1554185" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	49c1c104eec15       523cad1a4df73       59 seconds ago       Exited              dashboard-metrics-scraper   1                   ab391ac8df2d6
	c59361d49c6ac       85e6c0cff043f       About a minute ago   Running             kubernetes-dashboard        0                   208e67c0b68d2
	69ead03de4d0c       ba04bb24b9575       About a minute ago   Exited              storage-provisioner         0                   0506d9abb91c5
	99869cae05700       4ea38350a1beb       About a minute ago   Running             kube-proxy                  0                   07d92f4977907
	9c7e6513031e2       f37b7c809e5dc       About a minute ago   Running             kindnet-cni                 0                   9682c574efcad
	1bdff89355457       1a1f05a2cd7c2       About a minute ago   Running             coredns                     0                   c4296c2c68eef
	2a39eb75e1b7a       05b738aa1bc63       About a minute ago   Running             etcd                        0                   d9d56919ab979
	18cab4bfea9e2       31a3b96cefc1e       About a minute ago   Running             kube-scheduler              0                   a279c06813b2d
	ac51e68317b3c       cb310ff289d79       About a minute ago   Running             kube-controller-manager     0                   4275fafdcad20
	8881511682909       44a6d50ef170d       About a minute ago   Running             kube-apiserver              0                   4a4578b9de7d2
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:51:25 UTC, end at Tue 2021-08-17 02:58:04 UTC. --
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.546054197Z" level=info msg="Finish piping stdout of container \"266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2\""
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.546079690Z" level=info msg="Finish piping stderr of container \"266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2\""
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.550923064Z" level=info msg="StartContainer for \"266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2\" returns successfully"
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.551059645Z" level=info msg="TaskExit event &TaskExit{ContainerID:266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2,ID:266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2,Pid:5938,ExitStatus:1,ExitedAt:2021-08-17 02:57:04.548271349 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.583433307Z" level=info msg="shim disconnected" id=266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2
	Aug 17 02:57:04 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:04.583588497Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.241580825Z" level=info msg="CreateContainer within sandbox \"ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.261856177Z" level=info msg="CreateContainer within sandbox \"ab391ac8df2d6e364d74cf7cb06098450113ae896c0281f46ecfefef3a312683\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37\""
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.262940072Z" level=info msg="StartContainer for \"49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37\""
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.343337587Z" level=info msg="Finish piping stderr of container \"49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37\""
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.343515316Z" level=info msg="Finish piping stdout of container \"49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37\""
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.346962585Z" level=info msg="StartContainer for \"49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37\" returns successfully"
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.347049050Z" level=info msg="TaskExit event &TaskExit{ContainerID:49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37,ID:49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37,Pid:6021,ExitStatus:1,ExitedAt:2021-08-17 02:57:05.344677561 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.373819181Z" level=info msg="shim disconnected" id=49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:05.373984242Z" level=error msg="copy shim log" error="read /proc/self/fd/145: file already closed"
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:06.237102664Z" level=info msg="RemoveContainer for \"266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2\""
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:06.242416638Z" level=info msg="RemoveContainer for \"266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2\" returns successfully"
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:15.136595750Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:15.140555448Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:15.141900034Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Aug 17 02:57:27 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:27.692806698Z" level=info msg="Finish piping stderr of container \"69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8\""
	Aug 17 02:57:27 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:27.692949786Z" level=info msg="Finish piping stdout of container \"69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8\""
	Aug 17 02:57:27 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:27.694784107Z" level=info msg="TaskExit event &TaskExit{ContainerID:69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8,ID:69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8,Pid:5603,ExitStatus:255,ExitedAt:2021-08-17 02:57:27.694351036 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 02:57:27 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:27.722785241Z" level=info msg="shim disconnected" id=69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8
	Aug 17 02:57:27 default-k8s-different-port-20210817024852-1554185 containerd[343]: time="2021-08-17T02:57:27.723005916Z" level=error msg="copy shim log" error="read /proc/self/fd/116: file already closed"
	
	* 
	* ==> coredns [1bdff893554576ef1893e3a48f635f4e83a35283fe872906ce488f5f5098db18] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [2a39eb75e1b7aa47392d91c5a1a6294d81ab1ee82c8991fdd0e04b3ed8363e45] <==
	* raft2021/08/17 02:56:28 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 02:56:28.959764 W | auth: simple token is not cryptographically signed
	2021-08-17 02:56:28.979366 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-17 02:56:28.986984 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/17 02:56:28 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 02:56:28.987295 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 02:56:28.990867 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 02:56:28.990996 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-17 02:56:28.991084 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/17 02:56:29 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 02:56:29 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 02:56:29 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 02:56:29 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 02:56:29 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 02:56:29.754369 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 02:56:29.759447 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 02:56:29.759499 I | etcdserver: published {Name:default-k8s-different-port-20210817024852-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 02:56:29.759509 I | embed: ready to serve client requests
	2021-08-17 02:56:29.763696 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 02:56:29.763803 I | embed: ready to serve client requests
	2021-08-17 02:56:29.765803 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 02:56:29.819959 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 02:56:52.218658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:56:57.675064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 02:57:07.670759 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  02:59:04 up 10:41,  0 users,  load average: 2.60, 1.86, 1.68
	Linux default-k8s-different-port-20210817024852-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [88815116829098f27fddff1e90671799408cf4c3f5c453292e9764e29733e173] <==
	* W0817 02:59:01.206653       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:02.369030       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:02.424722       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:02.841880       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:03.330502       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:03.486483       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:03.523303       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:03.602759       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:03.774306       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:03.854941       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:03.920965       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:03.930238       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:03.941526       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:04.029964       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:04.109499       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:04.246109       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	W0817 02:59:04.355675       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
	I0817 02:59:04.578472       1 trace.go:205] Trace[1152001851]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (17-Aug-2021 02:58:04.579) (total time: 59999ms):
	Trace[1152001851]: [59.999026907s] [59.999026907s] END
	E0817 02:59:04.578497       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0817 02:59:04.578721       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0817 02:59:04.579868       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0817 02:59:04.580950       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0817 02:59:04.581958       1 trace.go:205] Trace[897997808]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/arm64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (17-Aug-2021 02:58:04.579) (total time: 60002ms):
	Trace[897997808]: [1m0.002523873s] [1m0.002523873s] END
	
	* 
	* ==> kube-controller-manager [ac51e68317b3c5744d0bf10680da023a4975a1d084a45b8c1439187380a5617a] <==
	* E0817 02:56:58.691392       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 02:56:58.698676       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.699092       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 02:56:58.707169       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.707531       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 02:56:58.715412       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 02:56:58.715741       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.715883       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 02:56:58.716005       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 02:56:58.729273       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 02:56:58.729396       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.736111       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 02:56:58.736947       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 02:56:58.797910       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-h5wgx"
	I0817 02:56:58.849622       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-twxcq"
	I0817 02:56:59.080534       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0817 02:57:24.403599       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 02:57:24.941857       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 02:57:54.422887       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 02:57:54.963282       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 02:58:24.461749       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 02:58:25.032160       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 02:58:28.128949       1 node_lifecycle_controller.go:1107] Error updating node default-k8s-different-port-20210817024852-1554185: Timeout: request did not complete within requested timeout context deadline exceeded
	E0817 02:58:54.499945       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 02:58:55.056815       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [99869cae0570079ecb2d7a6de0fc69c4f5f8d79bfdb2634e628823165c28384b] <==
	* I0817 02:56:57.457781       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 02:56:57.457830       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 02:56:57.457854       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 02:56:57.506259       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 02:56:57.506290       1 server_others.go:212] Using iptables Proxier.
	I0817 02:56:57.512218       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 02:56:57.512251       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 02:56:57.512520       1 server.go:643] Version: v1.21.3
	I0817 02:56:57.521149       1 config.go:315] Starting service config controller
	I0817 02:56:57.521165       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 02:56:57.521189       1 config.go:224] Starting endpoint slice config controller
	I0817 02:56:57.521192       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 02:56:57.539450       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 02:56:57.543041       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 02:56:57.622884       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 02:56:57.622946       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [18cab4bfea9e2c666be7066d3bb47707c9ff81979391b21edd7a88e4ccca302b] <==
	* W0817 02:56:37.860034       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 02:56:37.980720       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 02:56:37.981365       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 02:56:37.981384       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 02:56:37.981406       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 02:56:37.991062       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 02:56:37.997958       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:56:37.998053       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 02:56:37.998115       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:56:37.998348       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 02:56:37.998424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 02:56:37.998491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 02:56:37.998558       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 02:56:37.998623       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:56:37.998689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 02:56:37.998738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 02:56:38.000360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:56:38.000486       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 02:56:38.002109       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 02:56:38.840276       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 02:56:38.903980       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 02:56:38.912303       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 02:56:38.972949       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 02:56:39.012370       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0817 02:56:39.384997       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:51:25 UTC, end at Tue 2021-08-17 02:59:04 UTC. --
	Aug 17 02:56:59 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:56:59.225876    4621 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 02:56:59 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:56:59.226009    4621 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 02:56:59 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:56:59.226457    4621 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d7z6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-67mmz_kube-system(d68b6163-f479-44ce-b297-206cc3375f8f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Aug 17 02:56:59 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:56:59.226625    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-7c784ccb57-67mmz" podUID=d68b6163-f479-44ce-b297-206cc3375f8f
	Aug 17 02:56:59 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:56:59.425842    4621 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podb4483bc5-0558-4d83-96e9-b61e6cb235ae\": RecentStats: unable to find data in memory cache]"
	Aug 17 02:57:00 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:00.210696    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-67mmz" podUID=d68b6163-f479-44ce-b297-206cc3375f8f
	Aug 17 02:57:05 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:05.220323    4621 scope.go:111] "RemoveContainer" containerID="266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2"
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: W0817 02:57:06.008472    4621 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod1314b7d4-1f3d-489b-81c9-9e21210da53e/266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2 WatchSource:0}: task 266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2 not found: not found
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:06.223330    4621 scope.go:111] "RemoveContainer" containerID="266030b21b122f1bb55db40873f9c4cdee1729c4d612ca294439f60bbe2f17a2"
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:06.223640    4621 scope.go:111] "RemoveContainer" containerID="49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37"
	Aug 17 02:57:06 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:06.223943    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-twxcq_kubernetes-dashboard(1314b7d4-1f3d-489b-81c9-9e21210da53e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-twxcq" podUID=1314b7d4-1f3d-489b-81c9-9e21210da53e
	Aug 17 02:57:07 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:07.226304    4621 scope.go:111] "RemoveContainer" containerID="49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37"
	Aug 17 02:57:07 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:07.226576    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-twxcq_kubernetes-dashboard(1314b7d4-1f3d-489b-81c9-9e21210da53e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-twxcq" podUID=1314b7d4-1f3d-489b-81c9-9e21210da53e
	Aug 17 02:57:07 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: W0817 02:57:07.513355    4621 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod1314b7d4-1f3d-489b-81c9-9e21210da53e/49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37 WatchSource:0}: task 49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37 not found: not found
	Aug 17 02:57:09 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:09.500390    4621 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podb4483bc5-0558-4d83-96e9-b61e6cb235ae\": RecentStats: unable to find data in memory cache]"
	Aug 17 02:57:12 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:12.123484    4621 scope.go:111] "RemoveContainer" containerID="49c1c104eec150313703a7b97de0956b654765b020dea9ece8579cddb1b1ec37"
	Aug 17 02:57:12 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:12.123808    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-twxcq_kubernetes-dashboard(1314b7d4-1f3d-489b-81c9-9e21210da53e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-twxcq" podUID=1314b7d4-1f3d-489b-81c9-9e21210da53e
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:15.142072    4621 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:15.142113    4621 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:15.142211    4621 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d7z6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-67mmz_kube-system(d68b6163-f479-44ce-b297-206cc3375f8f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: E0817 02:57:15.142253    4621 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-7c784ccb57-67mmz" podUID=d68b6163-f479-44ce-b297-206cc3375f8f
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 kubelet[4621]: I0817 02:57:15.917743    4621 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 02:57:15 default-k8s-different-port-20210817024852-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [c59361d49c6ac6cc48475acf060507cb39bc30b5b8d890ddb8b5daf044d97e18] <==
	* 2021/08/17 02:57:00 Starting overwatch
	2021/08/17 02:57:00 Using namespace: kubernetes-dashboard
	2021/08/17 02:57:00 Using in-cluster config to connect to apiserver
	2021/08/17 02:57:00 Using secret token for csrf signing
	2021/08/17 02:57:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/17 02:57:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/17 02:57:00 Successful initial request to the apiserver, version: v1.21.3
	2021/08/17 02:57:00 Generating JWE encryption key
	2021/08/17 02:57:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/17 02:57:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/17 02:57:00 Initializing JWE encryption key from synchronized object
	2021/08/17 02:57:00 Creating in-cluster Sidecar client
	2021/08/17 02:57:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/17 02:57:00 Serving insecurely on HTTP port: 9090
	2021/08/17 02:57:55 Metric client health check failed: an error on the server ("unknown") has prevented the request from succeeding (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [69ead03de4d0c3f044479b1d6eb598eaad3213444757ddb1e6ed5ff90f671ae8] <==
	* 	/usr/local/go/src/sync/cond.go:56 +0xb8
	k8s.io/client-go/util/workqueue.(*Type).Get(0x400047d680, 0x0, 0x0, 0x1c200)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x84
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0x40000df680, 0x1298cd0, 0x40002faa80, 0x40000e6d20)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x34
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x54
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x40001e6d20)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x64
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40001e6d20, 0x1267368, 0x40001ecd80, 0x1, 0x40000e6360)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x74
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40001e6d20, 0x3b9aca00, 0x0, 0x1, 0x40000e6360)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x88
	k8s.io/apimachinery/pkg/util/wait.Until(0x40001e6d20, 0x3b9aca00, 0x40000e6360)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x48
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x308
	
	goroutine 108 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0x40000d9d00, 0x40003a0280)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x31c
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 02:59:04.585633 1699180 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (109.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0817 03:02:57.929821 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0817 03:03:14.884142 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0817 03:03:31.847389 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0817 03:03:39.375524 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0817 03:05:55.534893 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0817 03:06:23.215946 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
(identical warning repeated 63 more times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
(identical warning repeated 4 more times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
(identical warning repeated 4 more times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
(identical warning repeated 3 more times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
(identical warning repeated 26 more times)
E0817 03:08:14.883773 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
(identical warning repeated 16 more times)
E0817 03:08:31.847658 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
(identical warning repeated 41 more times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	(previous warning repeated 6 times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	(previous warning repeated 20 times)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	(previous warning repeated 70 times)
E0817 03:10:55.534900 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	(previous warning repeated 41 times)
start_stop_delete_test.go:247: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185: exit status 2 (278.336112ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: "old-k8s-version-20210817024805-1554185" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:248: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
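The repeated warnings above come from the helper polling the apiserver at https://192.168.58.2:8443 for dashboard pods by label while that endpoint refuses connections. A minimal, self-contained sketch of that kind of poll, assuming client-go and an illustrative kubeconfig path (this is not minikube's actual helper code; the file path and the 3s retry interval are placeholders):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative only: point this at whatever kubeconfig the profile under test uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(9 * time.Minute) // matches the 9m0s wait in the log
	for time.Now().Before(deadline) {
		// Same request the warnings show: list pods in "kubernetes-dashboard"
		// filtered by the k8s-app=kubernetes-dashboard label.
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the apiserver is down this surfaces as the
			// "dial tcp ...:8443: connect: connection refused" warnings above.
			fmt.Println("WARNING: pod list returned:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Printf("found %d dashboard pods\n", len(pods.Items))
		return
	}
	fmt.Println("timed out waiting for the condition")
}

Because the apiserver is reported as Stopped in the status output below, every list attempt fails the same way until the 9m0s deadline expires, which is the "timed out waiting for the condition" failure recorded above.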
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210817024805-1554185
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210817024805-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29",
	        "Created": "2021-08-17T02:48:07.556948774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1683873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:50:51.260024317Z",
	            "FinishedAt": "2021-08-17T02:50:50.057096311Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/hostname",
	        "HostsPath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/hosts",
	        "LogPath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29-json.log",
	        "Name": "/old-k8s-version-20210817024805-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210817024805-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210817024805-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210817024805-1554185",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210817024805-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210817024805-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210817024805-1554185",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210817024805-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8db859a5f76fa1e2614ca4a38811cf6cdc70c3b63b0f36c6d5b6de8b99796396",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50467"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50464"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50466"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50465"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8db859a5f76f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210817024805-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c8b9fbcd517c",
	                        "old-k8s-version-20210817024805-1554185"
	                    ],
	                    "NetworkID": "9aefabdb2d1d911a23f12e9e262da9d968a8cfa23ed9a2191472a782b604d2a8",
	                    "EndpointID": "1f6b1ef1bd2c282d73335e7da0951a5c768f124f1509e6b7cd10bfc8e555b194",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185: exit status 2 (283.574795ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-20210817024805-1554185 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 -p old-k8s-version-20210817024805-1554185 logs -n 25: exit status 110 (684.578616ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                      Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:30 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:50 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:52 UTC | Tue, 17 Aug 2021 02:50:54 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:02 UTC | Tue, 17 Aug 2021 02:51:03 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:03 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:57:04 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:57:15 UTC | Tue, 17 Aug 2021 02:57:15 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:05 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 03:01:12 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:21 UTC | Tue, 17 Aug 2021 03:01:22 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:22 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:07:27 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:38 UTC | Tue, 17 Aug 2021 03:07:38 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:40 UTC | Tue, 17 Aug 2021 03:07:41 UTC |
	|         | logs -n 25                                        |                                                   |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:42 UTC | Tue, 17 Aug 2021 03:07:43 UTC |
	|         | logs -n 25                                        |                                                   |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:44 UTC | Tue, 17 Aug 2021 03:07:47 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:47 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210817030748-1554185      | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | disable-driver-mounts-20210817030748-1554185      |                                                   |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:09:14 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:24 UTC | Tue, 17 Aug 2021 03:09:24 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:25 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 03:09:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 03:09:45.595717 1734845 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:09:45.595882 1734845 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:09:45.595892 1734845 out.go:311] Setting ErrFile to fd 2...
	I0817 03:09:45.595896 1734845 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:09:45.596029 1734845 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:09:45.596261 1734845 out.go:305] Setting JSON to false
	I0817 03:09:45.597078 1734845 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39124,"bootTime":1629130662,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 03:09:45.597149 1734845 start.go:121] virtualization:  
	I0817 03:09:45.599691 1734845 out.go:177] * [no-preload-20210817030748-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 03:09:45.602314 1734845 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 03:09:45.601227 1734845 notify.go:169] Checking for updates...
	I0817 03:09:45.604506 1734845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:09:45.606220 1734845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 03:09:45.607782 1734845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 03:09:45.608182 1734845 config.go:177] Loaded profile config "no-preload-20210817030748-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:09:45.608622 1734845 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 03:09:45.645796 1734845 docker.go:132] docker version: linux-20.10.8
	I0817 03:09:45.645869 1734845 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:09:45.761101 1734845 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:09:45.693503398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:09:45.761245 1734845 docker.go:244] overlay module found
	I0817 03:09:45.763624 1734845 out.go:177] * Using the docker driver based on existing profile
	I0817 03:09:45.763643 1734845 start.go:278] selected driver: docker
	I0817 03:09:45.763649 1734845 start.go:751] validating driver "docker" against &{Name:no-preload-20210817030748-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHo
stTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:09:45.763770 1734845 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 03:09:45.763808 1734845 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:09:45.763823 1734845 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:09:45.765334 1734845 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:09:45.765622 1734845 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:09:45.879144 1734845 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:09:45.801060075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 03:09:45.879289 1734845 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:09:45.879303 1734845 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:09:45.881509 1734845 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:09:45.881598 1734845 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 03:09:45.881618 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:09:45.881625 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:09:45.881634 1734845 start_flags.go:277] config:
	{Name:no-preload-20210817030748-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Multi
NodeRequested:false ExtraDisks:0}
	I0817 03:09:45.883980 1734845 out.go:177] * Starting control plane node no-preload-20210817030748-1554185 in cluster no-preload-20210817030748-1554185
	I0817 03:09:45.884009 1734845 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 03:09:45.885867 1734845 out.go:177] * Pulling base image ...
	I0817 03:09:45.885887 1734845 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 03:09:45.886004 1734845 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/config.json ...
	I0817 03:09:45.886270 1734845 cache.go:108] acquiring lock: {Name:mk632f6e0db9416813fd07fccbb58335b8e59d21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886405 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0817 03:09:45.886419 1734845 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 157.077µs
	I0817 03:09:45.886429 1734845 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0817 03:09:45.886443 1734845 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 03:09:45.886608 1734845 cache.go:108] acquiring lock: {Name:mk4fc0e92492b47d614457da59bc6dab952f8b05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886684 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0817 03:09:45.886696 1734845 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 92.225µs
	I0817 03:09:45.886705 1734845 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0817 03:09:45.886719 1734845 cache.go:108] acquiring lock: {Name:mk6dba5734dfeaf6d9d4511e98f054cac0439cfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886771 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0817 03:09:45.886780 1734845 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 62.432µs
	I0817 03:09:45.886790 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0817 03:09:45.886801 1734845 cache.go:108] acquiring lock: {Name:mkacaa9736949fc5d0494bb1d5c3531771bb3ea8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886855 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0817 03:09:45.886864 1734845 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 64.738µs
	I0817 03:09:45.886873 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0817 03:09:45.886885 1734845 cache.go:108] acquiring lock: {Name:mkf7cd9af6d882fda3a954c4eb39d82dc77cd0d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886917 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0817 03:09:45.886924 1734845 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 41.107µs
	I0817 03:09:45.886932 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0817 03:09:45.886942 1734845 cache.go:108] acquiring lock: {Name:mkeec948dbb922c159c4fc1af8656d60fa14d5a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886975 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0817 03:09:45.886984 1734845 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 42.682µs
	I0817 03:09:45.886994 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0817 03:09:45.887005 1734845 cache.go:108] acquiring lock: {Name:mkb04986d0796ebd5c4c0669e3d06018c5856bea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887038 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0817 03:09:45.887045 1734845 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 41.862µs
	I0817 03:09:45.887053 1734845 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0817 03:09:45.887063 1734845 cache.go:108] acquiring lock: {Name:mk79883006bb65c2c14816b6b80621971bab0e63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887095 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0817 03:09:45.887102 1734845 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 40.5µs
	I0817 03:09:45.887110 1734845 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0817 03:09:45.887121 1734845 cache.go:108] acquiring lock: {Name:mk9f3113ef4c19ec91ec377b2f94212c471844e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887153 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0817 03:09:45.887160 1734845 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 40.606µs
	I0817 03:09:45.887170 1734845 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0817 03:09:45.887180 1734845 cache.go:108] acquiring lock: {Name:mk17550e76c320cd5e7ed26cfb8c625219e409db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887221 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0817 03:09:45.887229 1734845 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 49.772µs
	I0817 03:09:45.887241 1734845 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0817 03:09:45.887246 1734845 cache.go:88] Successfully saved all images to host disk.
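The cache hits logged above (cache.go:116, cache.go:97) are the result of a plain existence check on the tarball path before anything is re-downloaded. A minimal sketch of that check, assuming a default ~/.minikube/cache layout where the image tag's ':' becomes '_' in the file name (this is an illustration, not minikube's cache package):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachedImagePath maps an image ref like "k8s.gcr.io/etcd:3.4.13-3" to the
    // on-disk tarball path seen in the log (the tag separator ':' becomes '_').
    func cachedImagePath(cacheDir, image string) string {
        return filepath.Join(cacheDir, "images", strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
        cacheDir := os.ExpandEnv("$HOME/.minikube/cache") // assumption: default cache location
        for _, img := range []string{
            "k8s.gcr.io/coredns/coredns:v1.8.0",
            "k8s.gcr.io/etcd:3.4.13-3",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        } {
            p := cachedImagePath(cacheDir, img)
            if _, err := os.Stat(p); err == nil {
                fmt.Printf("cache image %q -> %q exists, skipping save\n", img, p)
            } else {
                fmt.Printf("cache image %q missing, would save to %q\n", img, p)
            }
        }
    }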
	I0817 03:09:45.961928 1734845 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 03:09:45.961949 1734845 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 03:09:45.961966 1734845 cache.go:205] Successfully downloaded all kic artifacts
	I0817 03:09:45.962001 1734845 start.go:313] acquiring machines lock for no-preload-20210817030748-1554185: {Name:mkb71c7d4561b567efc566d76b68a021481de41c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.962079 1734845 start.go:317] acquired machines lock for "no-preload-20210817030748-1554185" in 63.121µs
	I0817 03:09:45.962097 1734845 start.go:93] Skipping create...Using existing machine configuration
	I0817 03:09:45.962102 1734845 fix.go:55] fixHost starting: 
	I0817 03:09:45.962404 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:09:46.008511 1734845 fix.go:108] recreateIfNeeded on no-preload-20210817030748-1554185: state=Stopped err=<nil>
	W0817 03:09:46.008543 1734845 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 03:09:46.010730 1734845 out.go:177] * Restarting existing docker container for "no-preload-20210817030748-1554185" ...
	I0817 03:09:46.010790 1734845 cli_runner.go:115] Run: docker start no-preload-20210817030748-1554185
	I0817 03:09:46.387790 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:09:46.422718 1734845 kic.go:420] container "no-preload-20210817030748-1554185" state is running.
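The restart above is driven by two Docker CLI calls: inspect the container's .State.Status, and `docker start` it when it is not running. A rough standalone equivalent (illustrative only, not minikube's cli_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState returns .State.Status for a container, e.g. "running" or "exited".
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        const name = "no-preload-20210817030748-1554185"
        state, err := containerState(name)
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        if state != "running" {
            // Mirrors "Restarting existing docker container ..." in the log.
            if err := exec.Command("docker", "start", name).Run(); err != nil {
                fmt.Println("docker start failed:", err)
                return
            }
        }
        fmt.Println("container state:", state)
    }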
	I0817 03:09:46.423270 1734845 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210817030748-1554185
	I0817 03:09:46.455024 1734845 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/config.json ...
	I0817 03:09:46.455190 1734845 machine.go:88] provisioning docker machine ...
	I0817 03:09:46.455203 1734845 ubuntu.go:169] provisioning hostname "no-preload-20210817030748-1554185"
	I0817 03:09:46.455245 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:46.490850 1734845 main.go:130] libmachine: Using SSH client type: native
	I0817 03:09:46.491023 1734845 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50493 <nil> <nil>}
	I0817 03:09:46.491052 1734845 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210817030748-1554185 && echo "no-preload-20210817030748-1554185" | sudo tee /etc/hostname
	I0817 03:09:46.491652 1734845 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52816->127.0.0.1:50493: read: connection reset by peer
	I0817 03:09:49.617716 1734845 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210817030748-1554185
	
	I0817 03:09:49.617784 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:49.649149 1734845 main.go:130] libmachine: Using SSH client type: native
	I0817 03:09:49.649317 1734845 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50493 <nil> <nil>}
	I0817 03:09:49.649345 1734845 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210817030748-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210817030748-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210817030748-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 03:09:49.761988 1734845 main.go:130] libmachine: SSH cmd err, output: <nil>: 
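Hostname provisioning is a single remote shell command: set the kernel hostname and persist it to /etc/hostname, followed by the /etc/hosts patch shown above. A sketch that issues the same command over ssh, assuming key-based access to the forwarded port from the log (the key path below is a placeholder, not the real machine key):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hostnameCmd reproduces the provisioning command from the log:
    // set the kernel hostname and persist it to /etc/hostname.
    func hostnameCmd(name string) string {
        return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name)
    }

    func main() {
        // Assumptions: key-based SSH to the container's forwarded port, as in the log
        // (user "docker" on 127.0.0.1:50493); adjust key path and port for a real run.
        cmd := exec.Command("ssh",
            "-i", "/path/to/id_rsa", // placeholder key path
            "-p", "50493",
            "docker@127.0.0.1",
            hostnameCmd("no-preload-20210817030748-1554185"))
        out, err := cmd.CombinedOutput()
        fmt.Printf("output: %s err: %v\n", out, err)
    }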
	I0817 03:09:49.762013 1734845 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 03:09:49.762044 1734845 ubuntu.go:177] setting up certificates
	I0817 03:09:49.762053 1734845 provision.go:83] configureAuth start
	I0817 03:09:49.762113 1734845 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210817030748-1554185
	I0817 03:09:49.805225 1734845 provision.go:138] copyHostCerts
	I0817 03:09:49.805278 1734845 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 03:09:49.805285 1734845 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 03:09:49.805341 1734845 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 03:09:49.805411 1734845 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 03:09:49.805418 1734845 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 03:09:49.805438 1734845 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 03:09:49.805482 1734845 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 03:09:49.805486 1734845 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 03:09:49.805505 1734845 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 03:09:49.805539 1734845 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210817030748-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210817030748-1554185]
	I0817 03:09:50.088826 1734845 provision.go:172] copyRemoteCerts
	I0817 03:09:50.088904 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 03:09:50.088957 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.120178 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.204693 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 03:09:50.219811 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0817 03:09:50.234847 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 03:09:50.249568 1734845 provision.go:86] duration metric: configureAuth took 487.504363ms
	I0817 03:09:50.249586 1734845 ubuntu.go:193] setting minikube options for container-runtime
	I0817 03:09:50.249747 1734845 config.go:177] Loaded profile config "no-preload-20210817030748-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:09:50.249761 1734845 machine.go:91] provisioned docker machine in 3.794563903s
	I0817 03:09:50.249768 1734845 start.go:267] post-start starting for "no-preload-20210817030748-1554185" (driver="docker")
	I0817 03:09:50.249775 1734845 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 03:09:50.249819 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 03:09:50.249855 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.280274 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.364600 1734845 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 03:09:50.366851 1734845 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 03:09:50.366874 1734845 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 03:09:50.366885 1734845 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 03:09:50.366893 1734845 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 03:09:50.366902 1734845 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 03:09:50.366958 1734845 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 03:09:50.367038 1734845 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 03:09:50.367128 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 03:09:50.372464 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:09:50.386603 1734845 start.go:270] post-start completed in 136.82351ms
	I0817 03:09:50.386665 1734845 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 03:09:50.386708 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.416929 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.499112 1734845 fix.go:57] fixHost completed within 4.537005735s
	I0817 03:09:50.499156 1734845 start.go:80] releasing machines lock for "no-preload-20210817030748-1554185", held for 4.537067921s
	I0817 03:09:50.499234 1734845 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210817030748-1554185
	I0817 03:09:50.528911 1734845 ssh_runner.go:149] Run: systemctl --version
	I0817 03:09:50.528958 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.529174 1734845 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 03:09:50.529224 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.570479 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.581923 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.789210 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 03:09:50.807188 1734845 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 03:09:50.817365 1734845 docker.go:153] disabling docker service ...
	I0817 03:09:50.817403 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 03:09:50.828441 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 03:09:50.838006 1734845 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 03:09:50.937592 1734845 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 03:09:51.044840 1734845 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 03:09:51.053358 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 03:09:51.064501 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0817 03:09:51.075958 1734845 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 03:09:51.081433 1734845 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 03:09:51.086713 1734845 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 03:09:51.172289 1734845 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 03:09:51.291561 1734845 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 03:09:51.291621 1734845 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 03:09:51.295154 1734845 start.go:413] Will wait 60s for crictl version
	I0817 03:09:51.295201 1734845 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:09:51.318238 1734845 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T03:09:51Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 03:10:02.365028 1734845 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:10:02.391854 1734845 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
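The gap between 03:09:51 and 03:10:02 is the retry noted at retry.go:31: `crictl version` is simply re-run until containerd's CRI server reports a version instead of "server is not initialized yet". A minimal retry loop in the same spirit (fixed backoff here, chosen arbitrarily; minikube computes its own delay):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runWithRetry re-runs the command until it exits 0 or attempts are exhausted.
    func runWithRetry(attempts int, delay time.Duration, name string, args ...string) ([]byte, error) {
        var out []byte
        var err error
        for i := 0; i < attempts; i++ {
            out, err = exec.Command(name, args...).CombinedOutput()
            if err == nil {
                return out, nil
            }
            fmt.Printf("attempt %d failed (%v), retrying in %s\n", i+1, err, delay)
            time.Sleep(delay)
        }
        return out, err
    }

    func main() {
        // Same invocation as the log; succeeds once containerd finishes restarting.
        out, err := runWithRetry(5, 10*time.Second, "sudo", "crictl", "version")
        fmt.Printf("%s(err=%v)\n", out, err)
    }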
	I0817 03:10:02.391910 1734845 ssh_runner.go:149] Run: containerd --version
	I0817 03:10:02.413907 1734845 ssh_runner.go:149] Run: containerd --version
	I0817 03:10:02.437726 1734845 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0817 03:10:02.437797 1734845 cli_runner.go:115] Run: docker network inspect no-preload-20210817030748-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:10:02.468777 1734845 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 03:10:02.471802 1734845 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:10:02.480305 1734845 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 03:10:02.480343 1734845 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:10:02.504624 1734845 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:10:02.504642 1734845 cache_images.go:74] Images are preloaded, skipping loading
	I0817 03:10:02.504681 1734845 ssh_runner.go:149] Run: sudo crictl info
	I0817 03:10:02.525951 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:10:02.525972 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:10:02.525982 1734845 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 03:10:02.525994 1734845 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210817030748-1554185 NodeName:no-preload-20210817030748-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupf
s ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 03:10:02.526120 1734845 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20210817030748-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 03:10:02.526201 1734845 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20210817030748-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 03:10:02.526251 1734845 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0817 03:10:02.532165 1734845 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 03:10:02.532209 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 03:10:02.538076 1734845 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (583 bytes)
	I0817 03:10:02.549244 1734845 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 03:10:02.559733 1734845 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
	I0817 03:10:02.570496 1734845 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 03:10:02.572893 1734845 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:10:02.580287 1734845 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185 for IP: 192.168.49.2
	I0817 03:10:02.580356 1734845 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 03:10:02.580376 1734845 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 03:10:02.580418 1734845 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.key
	I0817 03:10:02.580452 1734845 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/apiserver.key.dd3b5fb2
	I0817 03:10:02.580472 1734845 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/proxy-client.key
	I0817 03:10:02.580563 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 03:10:02.580621 1734845 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 03:10:02.580635 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 03:10:02.580658 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 03:10:02.580690 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 03:10:02.580716 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 03:10:02.580762 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:10:02.581815 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 03:10:02.596113 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 03:10:02.610196 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 03:10:02.624534 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 03:10:02.638602 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 03:10:02.652883 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 03:10:02.667765 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 03:10:02.682078 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 03:10:02.696076 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 03:10:02.710132 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 03:10:02.724104 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 03:10:02.741265 1734845 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 03:10:02.753135 1734845 ssh_runner.go:149] Run: openssl version
	I0817 03:10:02.758662 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 03:10:02.766005 1734845 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 03:10:02.769984 1734845 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 03:10:02.770058 1734845 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 03:10:02.774787 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 03:10:02.782462 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 03:10:02.788967 1734845 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 03:10:02.791943 1734845 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 03:10:02.792016 1734845 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 03:10:02.796538 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 03:10:02.803506 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 03:10:02.810234 1734845 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:10:02.813269 1734845 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:10:02.813341 1734845 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:10:02.817670 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
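Each certificate is published to the system trust store by asking openssl for its subject hash and symlinking <hash>.0 to the PEM, which is what the paired `openssl x509 -hash` / `ln -fs` calls above do. A small sketch of that step (writing under /etc/ssl/certs needs root; paths taken from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // subjectHash returns the OpenSSL subject hash used to name
    // /etc/ssl/certs/<hash>.0 symlinks (same openssl call as in the log).
    func subjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        hash, err := subjectHash(cert)
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        link := filepath.Join("/etc/ssl/certs", hash+".0") // e.g. b5213941.0 in the log
        // os.Symlink fails if the link already exists; a fuller version would remove
        // it first or use `ln -fs` exactly as the log does.
        err = os.Symlink(cert, link)
        fmt.Println("symlink", link, "->", cert, "err:", err)
    }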
	I0817 03:10:02.826139 1734845 kubeadm.go:390] StartCluster: {Name:no-preload-20210817030748-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Sched
uledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:10:02.826272 1734845 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 03:10:02.829083 1734845 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:10:02.860893 1734845 cri.go:76] found id: "5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d"
	I0817 03:10:02.860910 1734845 cri.go:76] found id: "9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919"
	I0817 03:10:02.860916 1734845 cri.go:76] found id: "36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5"
	I0817 03:10:02.860920 1734845 cri.go:76] found id: "d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499"
	I0817 03:10:02.860925 1734845 cri.go:76] found id: "cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60"
	I0817 03:10:02.860931 1734845 cri.go:76] found id: "761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63"
	I0817 03:10:02.860938 1734845 cri.go:76] found id: "55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f"
	I0817 03:10:02.860943 1734845 cri.go:76] found id: "4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc"
	I0817 03:10:02.860947 1734845 cri.go:76] found id: ""
	I0817 03:10:02.860986 1734845 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:10:02.876381 1734845 cri.go:103] JSON = null
	W0817 03:10:02.876425 1734845 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
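The kubeadm.go:397 warning comes from cross-checking two views of the runtime: container IDs from the label-filtered `crictl ps` against whatever `runc list` reports under containerd's k8s.io root. A rough sketch of that comparison (error handling trimmed, same commands as the log):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // IDs of kube-system containers, one per line (matches the crictl call above).
        psOut, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        ids := strings.Fields(string(psOut))

        // runc's view of containers under containerd's k8s.io root; prints "null" when empty.
        listOut, _ := exec.Command("sudo", "runc",
            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        var entries []map[string]interface{}
        _ = json.Unmarshal(listOut, &entries) // JSON null unmarshals to a nil slice

        if len(entries) == 0 && len(ids) > 0 {
            fmt.Printf("unpause check: list returned 0 containers, but ps returned %d\n", len(ids))
        }
    }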
	I0817 03:10:02.876467 1734845 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 03:10:02.883651 1734845 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 03:10:02.883665 1734845 kubeadm.go:600] restartCluster start
	I0817 03:10:02.883716 1734845 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 03:10:02.891213 1734845 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:02.892046 1734845 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210817030748-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:10:02.892308 1734845 kubeconfig.go:128] "no-preload-20210817030748-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 03:10:02.892839 1734845 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:10:02.895523 1734845 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 03:10:02.902064 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:02.902116 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:02.911081 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.111403 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.111554 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.120856 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.312115 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.312186 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.321126 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.511190 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.511278 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.520092 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.711390 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.711449 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.720190 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.911517 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.911590 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.927188 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.111420 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.111475 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.120528 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.311784 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.311847 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.320515 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.511725 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.511806 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.520331 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.711571 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.711653 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.720228 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.911498 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.911603 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.922093 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.111424 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.111494 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.120830 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.312038 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.312096 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.320765 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.512027 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.512071 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.520648 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.712032 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.712087 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.720826 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.911883 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.911985 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.923085 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.923128 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.923192 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.932466 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.932514 1734845 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
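The burst of api_server.go:164/168 lines is a fixed-interval poll: run `pgrep -xnf kube-apiserver.*minikube.*` roughly every 200ms and give up after a deadline, which is what produces the "needs reconfigure" conclusion above. A condensed sketch (the 3s deadline is chosen only for illustration):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverPID runs the same pgrep the log does; exit status 1 means "no match".
    func apiserverPID() (string, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        deadline := time.Now().Add(3 * time.Second)
        for time.Now().Before(deadline) {
            if pid, err := apiserverPID(); err == nil {
                fmt.Println("apiserver pid:", pid)
                return
            }
            time.Sleep(200 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the condition; cluster needs reconfigure")
    }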
	I0817 03:10:05.932533 1734845 kubeadm.go:1032] stopping kube-system containers ...
	I0817 03:10:05.932552 1734845 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:10:05.932619 1734845 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:10:05.959487 1734845 cri.go:76] found id: "5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d"
	I0817 03:10:05.959505 1734845 cri.go:76] found id: "9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919"
	I0817 03:10:05.959510 1734845 cri.go:76] found id: "36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5"
	I0817 03:10:05.959517 1734845 cri.go:76] found id: "d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499"
	I0817 03:10:05.959521 1734845 cri.go:76] found id: "cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60"
	I0817 03:10:05.959526 1734845 cri.go:76] found id: "761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63"
	I0817 03:10:05.959534 1734845 cri.go:76] found id: "55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f"
	I0817 03:10:05.959539 1734845 cri.go:76] found id: "4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc"
	I0817 03:10:05.959548 1734845 cri.go:76] found id: ""
	I0817 03:10:05.959553 1734845 cri.go:221] Stopping containers: [5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d 9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919 36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5 d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499 cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60 761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63 55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f 4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc]
	I0817 03:10:05.959598 1734845 ssh_runner.go:149] Run: which crictl
	I0817 03:10:05.962096 1734845 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d 9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919 36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5 d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499 cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60 761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63 55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f 4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc
	I0817 03:10:05.985016 1734845 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 03:10:05.994165 1734845 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:10:06.000009 1734845 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 17 03:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 17 03:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 17 03:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 17 03:08 /etc/kubernetes/scheduler.conf
	
	I0817 03:10:06.000052 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 03:10:06.005722 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 03:10:06.011325 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 03:10:06.016859 1734845 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:06.016897 1734845 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 03:10:06.022694 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 03:10:06.028263 1734845 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:06.028305 1734845 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 03:10:06.034042 1734845 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:10:06.039704 1734845 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
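The stale-config handling above is grep-based: any of the four kubeconfig-style files that does not mention https://control-plane.minikube.internal:8443 is treated as stale and removed so the following kubeadm init phases can regenerate it. A small sketch of that check (actual removal needs root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil {
                fmt.Println("skip:", err)
                continue
            }
            if !strings.Contains(string(data), endpoint) {
                // Mirrors "may not be in ... - will remove" followed by `sudo rm -f`.
                fmt.Println("stale, removing:", conf)
                _ = os.Remove(conf)
            }
        }
    }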
	I0817 03:10:06.039723 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:06.082382 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.615294 1734845 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.532882729s)
	I0817 03:10:08.615317 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.767910 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.915113 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.981033 1734845 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:10:08.981090 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:09.491936 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:09.991932 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:10.492229 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:10.991690 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:11.491572 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:11.991560 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:12.491549 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:12.992465 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:13.491498 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:13.991910 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:14.492177 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:14.991942 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:15.492364 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:15.511549 1734845 api_server.go:70] duration metric: took 6.530524968s to wait for apiserver process to appear ...
	I0817 03:10:15.511565 1734845 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:10:15.511573 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:20.514891 1734845 api_server.go:255] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 03:10:21.015169 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:21.565807 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:10:21.565826 1734845 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:10:22.015051 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:22.041924 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:10:22.041942 1734845 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:10:22.515122 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:22.524921 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:10:22.524982 1734845 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:10:23.015376 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:23.031209 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:10:23.058291 1734845 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 03:10:23.058308 1734845 api_server.go:129] duration metric: took 7.546737318s to wait for apiserver health ...
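The lines above show the apiserver readiness wait: after the kubeadm phases are re-run, minikube polls https://192.168.49.2:8443/healthz and tolerates the 403 (anonymous user) and 500 (post-start hooks still settling) responses until the endpoint returns 200. Below is a minimal Go sketch of that style of poll loop, for illustration only; it is not minikube's api_server.go implementation, and skipping TLS verification is an assumption made to keep the example short.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers 200 OK or
// the deadline passes. Non-200 responses (403, 500) are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip certificate verification instead of
			// loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane answered 200: healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}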
	I0817 03:10:23.058317 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:10:23.058324 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:10:23.061243 1734845 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:10:23.061294 1734845 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:10:23.065558 1734845 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0817 03:10:23.065571 1734845 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:10:23.111700 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 03:10:23.541478 1734845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:10:23.554265 1734845 system_pods.go:59] 9 kube-system pods found
	I0817 03:10:23.554329 1734845 system_pods.go:61] "coredns-78fcd69978-nxgmv" [e5cfb032-8c57-472c-8433-778c79a640b2] Running
	I0817 03:10:23.554348 1734845 system_pods.go:61] "etcd-no-preload-20210817030748-1554185" [a8887420-4d93-40e6-98dc-1983e6a39b00] Running
	I0817 03:10:23.554366 1734845 system_pods.go:61] "kindnet-w55nn" [b64f1d5a-7c2e-44a2-bb39-0461eb1fc34f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 03:10:23.554381 1734845 system_pods.go:61] "kube-apiserver-no-preload-20210817030748-1554185" [e4ac61de-aae2-40be-8dd1-8de97f9fbbf0] Running
	I0817 03:10:23.554399 1734845 system_pods.go:61] "kube-controller-manager-no-preload-20210817030748-1554185" [80d8992e-cee6-4d6c-9a3c-02efe38509c3] Running
	I0817 03:10:23.554425 1734845 system_pods.go:61] "kube-proxy-2wcnd" [98d1ffc4-ef5d-4686-85c5-e6c7c706a5d0] Running
	I0817 03:10:23.554446 1734845 system_pods.go:61] "kube-scheduler-no-preload-20210817030748-1554185" [da680647-558b-4c7f-9ea4-0493359ec794] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 03:10:23.554463 1734845 system_pods.go:61] "metrics-server-7c784ccb57-g4znl" [f28ee3e1-229f-43f7-a493-4ad334a03e12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:10:23.554479 1734845 system_pods.go:61] "storage-provisioner" [c8fcde2f-327e-462a-8883-25cd16bd9a0f] Running
	I0817 03:10:23.554495 1734845 system_pods.go:74] duration metric: took 13.002435ms to wait for pod list to return data ...
	I0817 03:10:23.554512 1734845 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:10:23.558744 1734845 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:10:23.558803 1734845 node_conditions.go:123] node cpu capacity is 2
	I0817 03:10:23.558880 1734845 node_conditions.go:105] duration metric: took 4.351282ms to run NodePressure ...
	I0817 03:10:23.558910 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:23.890429 1734845 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 03:10:23.895047 1734845 kubeadm.go:746] kubelet initialised
	I0817 03:10:23.895068 1734845 kubeadm.go:747] duration metric: took 4.62177ms waiting for restarted kubelet to initialise ...
	I0817 03:10:23.895075 1734845 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:10:23.901002 1734845 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:25.915651 1734845 pod_ready.go:102] pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:28.415696 1734845 pod_ready.go:102] pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:30.914925 1734845 pod_ready.go:92] pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:30.914950 1734845 pod_ready.go:81] duration metric: took 7.013913856s waiting for pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:30.914960 1734845 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:31.424098 1734845 pod_ready.go:92] pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:31.424117 1734845 pod_ready.go:81] duration metric: took 509.148838ms waiting for pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:31.424129 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.436384 1734845 pod_ready.go:92] pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.436405 1734845 pod_ready.go:81] duration metric: took 1.012268093s waiting for pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.436416 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.440968 1734845 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.440984 1734845 pod_ready.go:81] duration metric: took 4.56056ms waiting for pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.441001 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2wcnd" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.445649 1734845 pod_ready.go:92] pod "kube-proxy-2wcnd" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.445666 1734845 pod_ready.go:81] duration metric: took 4.656387ms waiting for pod "kube-proxy-2wcnd" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.445674 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.513148 1734845 pod_ready.go:92] pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.513166 1734845 pod_ready.go:81] duration metric: took 67.484919ms waiting for pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.513175 1734845 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:34.918735 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:36.919489 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:39.422799 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:41.992397 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:44.427232 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:46.918669 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:48.918991 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:50.919214 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:53.421151 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:55.918970 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:58.420373 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:00.926327 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:03.424380 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:05.919407 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:08.419192 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:10.419678 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:12.918432 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:14.919796 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:17.418745 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:19.420001 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:21.918548 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:23.919596 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:26.419907 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:28.423199 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
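The repeated pod_ready lines above are minikube's readiness wait: every couple of seconds it re-reads the metrics-server pod and logs Ready:"False" until the pod's Ready condition turns True or the 4m0s budget expires. Below is a hedged client-go sketch of that kind of poll; the kubeconfig path is a placeholder and this is not the actual pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls one pod until its Ready condition is True or the timeout expires.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "metrics-server-7c784ccb57-g4znl", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}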
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:50:51 UTC, end at Tue 2021-08-17 03:11:35 UTC. --
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.468623695Z" level=info msg="RemovePodSandbox \"7d53e801511ed07e6fabcb3c88dd69fd2c4ef7c3c028e9e44605be1ffc98ba60\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.491869824Z" level=info msg="StopPodSandbox for \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.491924815Z" level=info msg="Container to stop \"fdb4bd345708970e1d90521f8d81da07a88c79e442c82c7c115a4c1c6ded93a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.491993187Z" level=info msg="TearDown network for sandbox \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.492004092Z" level=info msg="StopPodSandbox for \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.526895702Z" level=info msg="RemovePodSandbox for \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.552169798Z" level=info msg="RemovePodSandbox \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.579967120Z" level=info msg="StopPodSandbox for \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.580012288Z" level=info msg="Container to stop \"2c48aa387b60234b5845590a62ab0933aef10e3afa1695cc7f5a93e93dc5b0c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.580081120Z" level=info msg="TearDown network for sandbox \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.580094043Z" level=info msg="StopPodSandbox for \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.609914286Z" level=info msg="RemovePodSandbox for \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.620668256Z" level=info msg="RemovePodSandbox \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.650881963Z" level=info msg="StopPodSandbox for \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.650932834Z" level=info msg="Container to stop \"86ac8067fb1b5139e8f2e23b9daa6b76aa704ec28b4c4cf6d281c7293bc4259d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.650992936Z" level=info msg="TearDown network for sandbox \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.651004021Z" level=info msg="StopPodSandbox for \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.680816075Z" level=info msg="RemovePodSandbox for \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.694511521Z" level=info msg="RemovePodSandbox \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724328999Z" level=info msg="StopPodSandbox for \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724372674Z" level=info msg="Container to stop \"f8b050af48208844c31f77ed2dc4fc25f4633ce187e85801e393aa0fce9c1ce0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724442655Z" level=info msg="TearDown network for sandbox \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724453609Z" level=info msg="StopPodSandbox for \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.749234435Z" level=info msg="RemovePodSandbox for \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.758871386Z" level=info msg="RemovePodSandbox \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> kernel <==
	*  03:11:35 up 10:53,  0 users,  load average: 2.26, 1.79, 1.64
	Linux old-k8s-version-20210817024805-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:50:51 UTC, end at Tue 2021-08-17 03:11:35 UTC. --
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.362184   29709 kubelet.go:1806] Starting kubelet main sync loop.
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.362246   29709 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.362497   29709 volume_manager.go:248] Starting Kubelet Volume Manager
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.366493   29709 desired_state_of_world_populator.go:130] Desired state populator starts to run
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: E0817 03:11:35.391498   29709 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: E0817 03:11:35.391642   29709 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/old-k8s-version-20210817024805-1554185?timeout=10s: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.392201   29709 clientconn.go:440] parsed scheme: "unix"
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.392217   29709 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.392247   29709 asm_arm64.s:1128] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0  <nil>}]
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.392256   29709 clientconn.go:796] ClientConn switching balancer to "pick_first"
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.392300   29709 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x4000920170, CONNECTING
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.392396   29709 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x4000920170, READY
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.464452   29709 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.464497   29709 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: E0817 03:11:35.464965   29709 kubelet.go:2244] node "old-k8s-version-20210817024805-1554185" not found
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.488345   29709 kubelet_node_status.go:72] Attempting to register node old-k8s-version-20210817024805-1554185
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: E0817 03:11:35.514197   29709 kubelet_node_status.go:94] Unable to register node "old-k8s-version-20210817024805-1554185" with API server: Post https://control-plane.minikube.internal:8443/api/v1/nodes: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.571590   29709 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: E0817 03:11:35.572102   29709 kubelet.go:2244] node "old-k8s-version-20210817024805-1554185" not found
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.572478   29709 cpu_manager.go:155] [cpumanager] starting with none policy
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.572561   29709 cpu_manager.go:156] [cpumanager] reconciling every 10s
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: I0817 03:11:35.572617   29709 policy_none.go:42] [cpumanager] none policy: Start
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 kubelet[29709]: F0817 03:11:35.573730   29709 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 17 03:11:35 old-k8s-version-20210817024805-1554185 systemd[1]: kubelet.service: Failed with result 'exit-code'.
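The fatal kubelet error above is the key line in this log: ContainerManager start-up fails because the node cannot write the hugetlb limit for the burstable QoS cgroup (permission denied on /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes), so kubelet.service exits with status 255 and restarts in a loop. Below is a small diagnostic sketch, assuming it is run as root on the affected node, that reproduces just the write-permission check from that error.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the kubelet failure above.
	path := "/sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes"
	f, err := os.OpenFile(path, os.O_WRONLY, 0)
	if err != nil {
		// This mirrors the "permission denied" seen in the kubelet log.
		fmt.Printf("cannot open %s for writing: %v\n", path, err)
		return
	}
	f.Close()
	fmt.Println("cgroup knob is writable")
}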
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 03:11:35.895602 1740188 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-20210817025908-1554185 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-20210817025908-1554185 --alsologtostderr -v=1: exit status 80 (2.009768242s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20210817025908-1554185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 03:07:38.303462 1726336 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:07:38.303555 1726336 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:07:38.303567 1726336 out.go:311] Setting ErrFile to fd 2...
	I0817 03:07:38.303571 1726336 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:07:38.303704 1726336 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:07:38.303882 1726336 out.go:305] Setting JSON to false
	I0817 03:07:38.303911 1726336 mustload.go:65] Loading cluster: embed-certs-20210817025908-1554185
	I0817 03:07:38.304241 1726336 config.go:177] Loaded profile config "embed-certs-20210817025908-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:07:38.304691 1726336 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:38.336255 1726336 host.go:66] Checking if "embed-certs-20210817025908-1554185" exists ...
	I0817 03:07:38.336965 1726336 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-20210817025908-1554185 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0817 03:07:38.339175 1726336 out.go:177] * Pausing node embed-certs-20210817025908-1554185 ... 
	I0817 03:07:38.339234 1726336 host.go:66] Checking if "embed-certs-20210817025908-1554185" exists ...
	I0817 03:07:38.339500 1726336 ssh_runner.go:149] Run: systemctl --version
	I0817 03:07:38.339540 1726336 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:07:38.370512 1726336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:07:38.470182 1726336 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:07:38.482961 1726336 pause.go:50] kubelet running: true
	I0817 03:07:38.483049 1726336 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 03:07:38.688015 1726336 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 03:07:38.688112 1726336 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 03:07:38.805695 1726336 cri.go:76] found id: "7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a"
	I0817 03:07:38.805721 1726336 cri.go:76] found id: "20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62"
	I0817 03:07:38.805726 1726336 cri.go:76] found id: "e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955"
	I0817 03:07:38.805731 1726336 cri.go:76] found id: "f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9"
	I0817 03:07:38.805763 1726336 cri.go:76] found id: "feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb"
	I0817 03:07:38.805772 1726336 cri.go:76] found id: "dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf"
	I0817 03:07:38.805776 1726336 cri.go:76] found id: "ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445"
	I0817 03:07:38.805784 1726336 cri.go:76] found id: "293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095"
	I0817 03:07:38.805788 1726336 cri.go:76] found id: "104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a"
	I0817 03:07:38.805797 1726336 cri.go:76] found id: "08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2"
	I0817 03:07:38.805808 1726336 cri.go:76] found id: ""
	I0817 03:07:38.805860 1726336 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:07:38.853573 1726336 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2","pid":5831,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2/rootfs","created":"2021-08-17T03:07:16.656250841Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62","pid":5350,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62","rootfs":"/run/containerd/io.containe
rd.runtime.v2.task/k8s.io/20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62/rootfs","created":"2021-08-17T03:07:13.79495153Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095","pid":4450,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095/rootfs","created":"2021-08-17T03:06:47.29412497Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778"},"own
er":"root"},{"ociVersion":"1.0.2-dev","id":"3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a","pid":4364,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a/rootfs","created":"2021-08-17T03:06:47.14071062Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210817025908-1554185_a908c4e86c4dc10972037b6ad13dcec4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9","pid":5662,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159b
ec76a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9/rootfs","created":"2021-08-17T03:07:16.17151077Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-48wrs_1573fd26-713e-4757-9c30-cdb6f8181a96"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2","pid":5904,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2/rootfs","created":"2021-08-17T03:07:16.944591119Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-i
d":"6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-gdlf6_2ecf1109-a9f9-4504-a5ac-e2dd767aa611"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb","pid":4373,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb/rootfs","created":"2021-08-17T03:06:47.152959534Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210817025908-1554185_8e3bfc73bc6ac50cb0f18d4529d2797a"},"owner":"root"},{"ociVersion":"1.0.2-dev
","id":"7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096","pid":4954,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096/rootfs","created":"2021-08-17T03:07:12.714233212Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-cjs8q_9d9df0cd-9f52-42a0-80dc-0d78009fd46c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778","pid":4327,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778","rootfs":"/run/containerd/io.containerd.runtime.v2.task/
k8s.io/79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778/rootfs","created":"2021-08-17T03:06:47.108496999Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210817025908-1554185_4971afa8502bf87c2dd20a55295f82dc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a","pid":5656,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a/rootfs","created":"2021-08-17T03:07:16.182919652Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kuber
netes.cri.sandbox-id":"92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a","pid":4390,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a/rootfs","created":"2021-08-17T03:06:47.185430974Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210817025908-1554185_4eda981fb5c431a16a0ca59222d4300c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9","pid":5571,"status":"running","bundle":"/run/containerd/
io.containerd.runtime.v2.task/k8s.io/92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9/rootfs","created":"2021-08-17T03:07:15.914092813Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_09ad25fa-17f2-48b6-b8fc-fe277ad894a1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b","pid":4983,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b/rootfs","created":"2021-08-17T03:07:12.831036353Z","ann
otations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-cnwp8_76ff7eb8-7cd3-45f4-8651-a91c5f883da1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380","pid":5226,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380/rootfs","created":"2021-08-17T03:07:13.544756899Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-cstpw_27d17672-32a4-41f6-829e-7536a182784e"},"owne
r":"root"},{"ociVersion":"1.0.2-dev","id":"d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9","pid":5792,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9/rootfs","created":"2021-08-17T03:07:16.562251606Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-wrgrg_4538c3b1-3b7e-491b-8874-738d8af30420"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf","pid":4539,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27
503fdf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf/rootfs","created":"2021-08-17T03:06:47.387108361Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955","pid":5139,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955/rootfs","created":"2021-08-17T03:07:13.329901749Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cdce4507fb252e15da817ea7ce4ba
7bb054de0e6bc6ce4466e25a1fbe6d2c78b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445","pid":4507,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445/rootfs","created":"2021-08-17T03:06:47.406414064Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9","pid":5148,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8
s.io/f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9/rootfs","created":"2021-08-17T03:07:13.326472181Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb","pid":4509,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb/rootfs","created":"2021-08-17T03:06:47.347464065Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a"},"owner":"root"}]
	I0817 03:07:38.853851 1726336 cri.go:113] list returned 20 containers
	I0817 03:07:38.853866 1726336 cri.go:116] container: {ID:08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2 Status:running}
	I0817 03:07:38.853879 1726336 cri.go:116] container: {ID:20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62 Status:running}
	I0817 03:07:38.853890 1726336 cri.go:116] container: {ID:293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095 Status:running}
	I0817 03:07:38.853895 1726336 cri.go:116] container: {ID:3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a Status:running}
	I0817 03:07:38.853906 1726336 cri.go:118] skipping 3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a - not in ps
	I0817 03:07:38.853911 1726336 cri.go:116] container: {ID:52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9 Status:running}
	I0817 03:07:38.853916 1726336 cri.go:118] skipping 52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9 - not in ps
	I0817 03:07:38.853920 1726336 cri.go:116] container: {ID:6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2 Status:running}
	I0817 03:07:38.853928 1726336 cri.go:118] skipping 6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2 - not in ps
	I0817 03:07:38.853935 1726336 cri.go:116] container: {ID:72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb Status:running}
	I0817 03:07:38.853941 1726336 cri.go:118] skipping 72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb - not in ps
	I0817 03:07:38.853947 1726336 cri.go:116] container: {ID:7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096 Status:running}
	I0817 03:07:38.853953 1726336 cri.go:118] skipping 7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096 - not in ps
	I0817 03:07:38.853958 1726336 cri.go:116] container: {ID:79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778 Status:running}
	I0817 03:07:38.853965 1726336 cri.go:118] skipping 79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778 - not in ps
	I0817 03:07:38.853969 1726336 cri.go:116] container: {ID:7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a Status:running}
	I0817 03:07:38.853976 1726336 cri.go:116] container: {ID:8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a Status:running}
	I0817 03:07:38.853982 1726336 cri.go:118] skipping 8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a - not in ps
	I0817 03:07:38.853991 1726336 cri.go:116] container: {ID:92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9 Status:running}
	I0817 03:07:38.853997 1726336 cri.go:118] skipping 92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9 - not in ps
	I0817 03:07:38.854001 1726336 cri.go:116] container: {ID:cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b Status:running}
	I0817 03:07:38.854010 1726336 cri.go:118] skipping cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b - not in ps
	I0817 03:07:38.854015 1726336 cri.go:116] container: {ID:ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380 Status:running}
	I0817 03:07:38.854023 1726336 cri.go:118] skipping ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380 - not in ps
	I0817 03:07:38.854029 1726336 cri.go:116] container: {ID:d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9 Status:running}
	I0817 03:07:38.854045 1726336 cri.go:118] skipping d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9 - not in ps
	I0817 03:07:38.854049 1726336 cri.go:116] container: {ID:dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf Status:running}
	I0817 03:07:38.854054 1726336 cri.go:116] container: {ID:e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955 Status:running}
	I0817 03:07:38.854060 1726336 cri.go:116] container: {ID:ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445 Status:running}
	I0817 03:07:38.854065 1726336 cri.go:116] container: {ID:f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9 Status:running}
	I0817 03:07:38.854074 1726336 cri.go:116] container: {ID:feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb Status:running}
	I0817 03:07:38.854117 1726336 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2
	I0817 03:07:38.867753 1726336 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2 20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62
	I0817 03:07:38.879746 1726336 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2 20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:07:38Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0817 03:07:39.156159 1726336 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:07:39.165246 1726336 pause.go:50] kubelet running: false
	I0817 03:07:39.165319 1726336 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 03:07:39.282325 1726336 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 03:07:39.282391 1726336 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 03:07:39.352216 1726336 cri.go:76] found id: "7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a"
	I0817 03:07:39.352239 1726336 cri.go:76] found id: "20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62"
	I0817 03:07:39.352244 1726336 cri.go:76] found id: "e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955"
	I0817 03:07:39.352248 1726336 cri.go:76] found id: "f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9"
	I0817 03:07:39.352253 1726336 cri.go:76] found id: "feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb"
	I0817 03:07:39.352278 1726336 cri.go:76] found id: "dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf"
	I0817 03:07:39.352289 1726336 cri.go:76] found id: "ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445"
	I0817 03:07:39.352293 1726336 cri.go:76] found id: "293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095"
	I0817 03:07:39.352297 1726336 cri.go:76] found id: "104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a"
	I0817 03:07:39.352305 1726336 cri.go:76] found id: "08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2"
	I0817 03:07:39.352315 1726336 cri.go:76] found id: ""
	I0817 03:07:39.352366 1726336 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:07:39.394473 1726336 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2","pid":5831,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2/rootfs","created":"2021-08-17T03:07:16.656250841Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62","pid":5350,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62","rootfs":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62/rootfs","created":"2021-08-17T03:07:13.79495153Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095","pid":4450,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095/rootfs","created":"2021-08-17T03:06:47.29412497Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778"},"owne
r":"root"},{"ociVersion":"1.0.2-dev","id":"3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a","pid":4364,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a/rootfs","created":"2021-08-17T03:06:47.14071062Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210817025908-1554185_a908c4e86c4dc10972037b6ad13dcec4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9","pid":5662,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159be
c76a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9/rootfs","created":"2021-08-17T03:07:16.17151077Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-48wrs_1573fd26-713e-4757-9c30-cdb6f8181a96"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2","pid":5904,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2/rootfs","created":"2021-08-17T03:07:16.944591119Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id
":"6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-gdlf6_2ecf1109-a9f9-4504-a5ac-e2dd767aa611"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb","pid":4373,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb/rootfs","created":"2021-08-17T03:06:47.152959534Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210817025908-1554185_8e3bfc73bc6ac50cb0f18d4529d2797a"},"owner":"root"},{"ociVersion":"1.0.2-dev"
,"id":"7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096","pid":4954,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096/rootfs","created":"2021-08-17T03:07:12.714233212Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-cjs8q_9d9df0cd-9f52-42a0-80dc-0d78009fd46c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778","pid":4327,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k
8s.io/79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778/rootfs","created":"2021-08-17T03:06:47.108496999Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210817025908-1554185_4971afa8502bf87c2dd20a55295f82dc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a","pid":5656,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a/rootfs","created":"2021-08-17T03:07:16.182919652Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubern
etes.cri.sandbox-id":"92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a","pid":4390,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a/rootfs","created":"2021-08-17T03:06:47.185430974Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210817025908-1554185_4eda981fb5c431a16a0ca59222d4300c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9","pid":5571,"status":"running","bundle":"/run/containerd/i
o.containerd.runtime.v2.task/k8s.io/92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9/rootfs","created":"2021-08-17T03:07:15.914092813Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_09ad25fa-17f2-48b6-b8fc-fe277ad894a1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b","pid":4983,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b/rootfs","created":"2021-08-17T03:07:12.831036353Z","anno
tations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-cnwp8_76ff7eb8-7cd3-45f4-8651-a91c5f883da1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380","pid":5226,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380/rootfs","created":"2021-08-17T03:07:13.544756899Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-cstpw_27d17672-32a4-41f6-829e-7536a182784e"},"owner
":"root"},{"ociVersion":"1.0.2-dev","id":"d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9","pid":5792,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9/rootfs","created":"2021-08-17T03:07:16.562251606Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-wrgrg_4538c3b1-3b7e-491b-8874-738d8af30420"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf","pid":4539,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e275
03fdf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf/rootfs","created":"2021-08-17T03:06:47.387108361Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955","pid":5139,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955/rootfs","created":"2021-08-17T03:07:13.329901749Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cdce4507fb252e15da817ea7ce4ba7
bb054de0e6bc6ce4466e25a1fbe6d2c78b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445","pid":4507,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445/rootfs","created":"2021-08-17T03:06:47.406414064Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9","pid":5148,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s
.io/f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9/rootfs","created":"2021-08-17T03:07:13.326472181Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb","pid":4509,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb/rootfs","created":"2021-08-17T03:06:47.347464065Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a"},"owner":"root"}]
	I0817 03:07:39.394736 1726336 cri.go:113] list returned 20 containers
	I0817 03:07:39.394750 1726336 cri.go:116] container: {ID:08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2 Status:paused}
	I0817 03:07:39.394761 1726336 cri.go:122] skipping {08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2 paused}: state = "paused", want "running"
	I0817 03:07:39.394775 1726336 cri.go:116] container: {ID:20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62 Status:running}
	I0817 03:07:39.394781 1726336 cri.go:116] container: {ID:293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095 Status:running}
	I0817 03:07:39.394788 1726336 cri.go:116] container: {ID:3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a Status:running}
	I0817 03:07:39.394794 1726336 cri.go:118] skipping 3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a - not in ps
	I0817 03:07:39.394803 1726336 cri.go:116] container: {ID:52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9 Status:running}
	I0817 03:07:39.394823 1726336 cri.go:118] skipping 52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9 - not in ps
	I0817 03:07:39.394830 1726336 cri.go:116] container: {ID:6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2 Status:running}
	I0817 03:07:39.394841 1726336 cri.go:118] skipping 6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2 - not in ps
	I0817 03:07:39.394845 1726336 cri.go:116] container: {ID:72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb Status:running}
	I0817 03:07:39.394850 1726336 cri.go:118] skipping 72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb - not in ps
	I0817 03:07:39.394857 1726336 cri.go:116] container: {ID:7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096 Status:running}
	I0817 03:07:39.394867 1726336 cri.go:118] skipping 7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096 - not in ps
	I0817 03:07:39.394875 1726336 cri.go:116] container: {ID:79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778 Status:running}
	I0817 03:07:39.394881 1726336 cri.go:118] skipping 79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778 - not in ps
	I0817 03:07:39.394887 1726336 cri.go:116] container: {ID:7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a Status:running}
	I0817 03:07:39.394894 1726336 cri.go:116] container: {ID:8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a Status:running}
	I0817 03:07:39.394901 1726336 cri.go:118] skipping 8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a - not in ps
	I0817 03:07:39.394908 1726336 cri.go:116] container: {ID:92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9 Status:running}
	I0817 03:07:39.394913 1726336 cri.go:118] skipping 92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9 - not in ps
	I0817 03:07:39.394917 1726336 cri.go:116] container: {ID:cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b Status:running}
	I0817 03:07:39.394927 1726336 cri.go:118] skipping cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b - not in ps
	I0817 03:07:39.394931 1726336 cri.go:116] container: {ID:ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380 Status:running}
	I0817 03:07:39.394938 1726336 cri.go:118] skipping ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380 - not in ps
	I0817 03:07:39.394942 1726336 cri.go:116] container: {ID:d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9 Status:running}
	I0817 03:07:39.394952 1726336 cri.go:118] skipping d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9 - not in ps
	I0817 03:07:39.394956 1726336 cri.go:116] container: {ID:dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf Status:running}
	I0817 03:07:39.394963 1726336 cri.go:116] container: {ID:e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955 Status:running}
	I0817 03:07:39.394973 1726336 cri.go:116] container: {ID:ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445 Status:running}
	I0817 03:07:39.394978 1726336 cri.go:116] container: {ID:f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9 Status:running}
	I0817 03:07:39.394987 1726336 cri.go:116] container: {ID:feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb Status:running}
	I0817 03:07:39.395030 1726336 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62
	I0817 03:07:39.408099 1726336 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62 293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095
	I0817 03:07:39.419656 1726336 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62 293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:07:39Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0817 03:07:39.960325 1726336 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:07:39.969711 1726336 pause.go:50] kubelet running: false
	I0817 03:07:39.969761 1726336 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 03:07:40.091986 1726336 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 03:07:40.092073 1726336 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 03:07:40.169142 1726336 cri.go:76] found id: "7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a"
	I0817 03:07:40.169194 1726336 cri.go:76] found id: "20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62"
	I0817 03:07:40.169205 1726336 cri.go:76] found id: "e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955"
	I0817 03:07:40.169210 1726336 cri.go:76] found id: "f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9"
	I0817 03:07:40.169215 1726336 cri.go:76] found id: "feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb"
	I0817 03:07:40.169219 1726336 cri.go:76] found id: "dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf"
	I0817 03:07:40.169228 1726336 cri.go:76] found id: "ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445"
	I0817 03:07:40.169237 1726336 cri.go:76] found id: "293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095"
	I0817 03:07:40.169241 1726336 cri.go:76] found id: "104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a"
	I0817 03:07:40.169249 1726336 cri.go:76] found id: "08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2"
	I0817 03:07:40.169258 1726336 cri.go:76] found id: ""
	I0817 03:07:40.169300 1726336 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:07:40.215453 1726336 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2","pid":5831,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2/rootfs","created":"2021-08-17T03:07:16.656250841Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62","pid":5350,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62","rootfs":"/run/containerd/io.containerd
.runtime.v2.task/k8s.io/20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62/rootfs","created":"2021-08-17T03:07:13.79495153Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095","pid":4450,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095/rootfs","created":"2021-08-17T03:06:47.29412497Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778"},"owner
":"root"},{"ociVersion":"1.0.2-dev","id":"3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a","pid":4364,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a/rootfs","created":"2021-08-17T03:06:47.14071062Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210817025908-1554185_a908c4e86c4dc10972037b6ad13dcec4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9","pid":5662,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec
76a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9/rootfs","created":"2021-08-17T03:07:16.17151077Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-48wrs_1573fd26-713e-4757-9c30-cdb6f8181a96"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2","pid":5904,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2/rootfs","created":"2021-08-17T03:07:16.944591119Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id"
:"6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-gdlf6_2ecf1109-a9f9-4504-a5ac-e2dd767aa611"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb","pid":4373,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb/rootfs","created":"2021-08-17T03:06:47.152959534Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210817025908-1554185_8e3bfc73bc6ac50cb0f18d4529d2797a"},"owner":"root"},{"ociVersion":"1.0.2-dev",
"id":"7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096","pid":4954,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096/rootfs","created":"2021-08-17T03:07:12.714233212Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-cjs8q_9d9df0cd-9f52-42a0-80dc-0d78009fd46c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778","pid":4327,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8
s.io/79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778/rootfs","created":"2021-08-17T03:06:47.108496999Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210817025908-1554185_4971afa8502bf87c2dd20a55295f82dc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a","pid":5656,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a/rootfs","created":"2021-08-17T03:07:16.182919652Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kuberne
tes.cri.sandbox-id":"92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a","pid":4390,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a/rootfs","created":"2021-08-17T03:06:47.185430974Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210817025908-1554185_4eda981fb5c431a16a0ca59222d4300c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9","pid":5571,"status":"running","bundle":"/run/containerd/io
.containerd.runtime.v2.task/k8s.io/92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9/rootfs","created":"2021-08-17T03:07:15.914092813Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_09ad25fa-17f2-48b6-b8fc-fe277ad894a1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b","pid":4983,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b/rootfs","created":"2021-08-17T03:07:12.831036353Z","annot
ations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-cnwp8_76ff7eb8-7cd3-45f4-8651-a91c5f883da1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380","pid":5226,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380/rootfs","created":"2021-08-17T03:07:13.544756899Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-cstpw_27d17672-32a4-41f6-829e-7536a182784e"},"owner"
:"root"},{"ociVersion":"1.0.2-dev","id":"d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9","pid":5792,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9/rootfs","created":"2021-08-17T03:07:16.562251606Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-wrgrg_4538c3b1-3b7e-491b-8874-738d8af30420"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf","pid":4539,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e2750
3fdf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf/rootfs","created":"2021-08-17T03:06:47.387108361Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955","pid":5139,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955/rootfs","created":"2021-08-17T03:07:13.329901749Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cdce4507fb252e15da817ea7ce4ba7b
b054de0e6bc6ce4466e25a1fbe6d2c78b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445","pid":4507,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445/rootfs","created":"2021-08-17T03:06:47.406414064Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9","pid":5148,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.
io/f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9/rootfs","created":"2021-08-17T03:07:13.326472181Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb","pid":4509,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb/rootfs","created":"2021-08-17T03:06:47.347464065Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a"},"owner":"root"}]
	I0817 03:07:40.215706 1726336 cri.go:113] list returned 20 containers
	I0817 03:07:40.215720 1726336 cri.go:116] container: {ID:08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2 Status:paused}
	I0817 03:07:40.215731 1726336 cri.go:122] skipping {08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2 paused}: state = "paused", want "running"
	I0817 03:07:40.215743 1726336 cri.go:116] container: {ID:20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62 Status:paused}
	I0817 03:07:40.215753 1726336 cri.go:122] skipping {20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62 paused}: state = "paused", want "running"
	I0817 03:07:40.215759 1726336 cri.go:116] container: {ID:293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095 Status:running}
	I0817 03:07:40.215768 1726336 cri.go:116] container: {ID:3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a Status:running}
	I0817 03:07:40.215774 1726336 cri.go:118] skipping 3a5be5205d51f37f851aa420fd3c33cd27c20a8547799c9503fe5968c83fef7a - not in ps
	I0817 03:07:40.215783 1726336 cri.go:116] container: {ID:52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9 Status:running}
	I0817 03:07:40.215789 1726336 cri.go:118] skipping 52c0c534c8cc85f2dc47c29739ebee4e875d16eddaf607d4ffc09a159bec76a9 - not in ps
	I0817 03:07:40.215795 1726336 cri.go:116] container: {ID:6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2 Status:running}
	I0817 03:07:40.215801 1726336 cri.go:118] skipping 6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2 - not in ps
	I0817 03:07:40.215810 1726336 cri.go:116] container: {ID:72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb Status:running}
	I0817 03:07:40.215815 1726336 cri.go:118] skipping 72070b39b4062a812ec7293d6f4cce239b3d071c9730dc73ce07f316285c48eb - not in ps
	I0817 03:07:40.215820 1726336 cri.go:116] container: {ID:7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096 Status:running}
	I0817 03:07:40.215825 1726336 cri.go:118] skipping 7291df23083661b5ecfb57e68ff2c267c13efd15a06bba32f25eb13076e62096 - not in ps
	I0817 03:07:40.215832 1726336 cri.go:116] container: {ID:79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778 Status:running}
	I0817 03:07:40.215837 1726336 cri.go:118] skipping 79b3b926c5f79d0b6655029fef788af53c5d8b1841c6671ef18812593a4ca778 - not in ps
	I0817 03:07:40.215844 1726336 cri.go:116] container: {ID:7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a Status:running}
	I0817 03:07:40.215850 1726336 cri.go:116] container: {ID:8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a Status:running}
	I0817 03:07:40.215859 1726336 cri.go:118] skipping 8cb8e2f03ce9b1f9ec9da556f102846b5ae2e8d904dc0392a259a5f749bf2d8a - not in ps
	I0817 03:07:40.215863 1726336 cri.go:116] container: {ID:92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9 Status:running}
	I0817 03:07:40.215869 1726336 cri.go:118] skipping 92635160ad3b920c7d6e3273ca611986286d954d70c3e753cf1f3154677713d9 - not in ps
	I0817 03:07:40.215880 1726336 cri.go:116] container: {ID:cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b Status:running}
	I0817 03:07:40.215885 1726336 cri.go:118] skipping cdce4507fb252e15da817ea7ce4ba7bb054de0e6bc6ce4466e25a1fbe6d2c78b - not in ps
	I0817 03:07:40.215891 1726336 cri.go:116] container: {ID:ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380 Status:running}
	I0817 03:07:40.215897 1726336 cri.go:118] skipping ce0590e491e2d1c1ce2770ab4d588b2a044a5930a3aab61f73f676539d589380 - not in ps
	I0817 03:07:40.215905 1726336 cri.go:116] container: {ID:d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9 Status:running}
	I0817 03:07:40.215910 1726336 cri.go:118] skipping d0ffff888b08e05b72bd66d521bc98a1bbf596da0ad45505bc7f02111f002dd9 - not in ps
	I0817 03:07:40.215918 1726336 cri.go:116] container: {ID:dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf Status:running}
	I0817 03:07:40.215927 1726336 cri.go:116] container: {ID:e08609b3518218c77efbbedb7de602d12cc462c02652fe775940f8ed3b802955 Status:running}
	I0817 03:07:40.215932 1726336 cri.go:116] container: {ID:ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445 Status:running}
	I0817 03:07:40.215939 1726336 cri.go:116] container: {ID:f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9 Status:running}
	I0817 03:07:40.215947 1726336 cri.go:116] container: {ID:feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb Status:running}
	I0817 03:07:40.215987 1726336 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095
	I0817 03:07:40.228988 1726336 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095 7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a
	I0817 03:07:40.243652 1726336 out.go:177] 
	W0817 03:07:40.243775 1726336 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095 7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:07:40Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095 7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:07:40Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0817 03:07:40.243790 1726336 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0817 03:07:40.251751 1726336 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_2.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_2.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0817 03:07:40.253302 1726336 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-arm64 pause -p embed-certs-20210817025908-1554185 --alsologtostderr -v=1 failed: exit status 80
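The GUEST_PAUSE exit captured above is triggered because a single `sudo runc --root /run/containerd/runc/k8s.io pause` invocation is handed two container IDs, while runc's pause command accepts exactly one argument (see the usage text echoed in the stderr). Below is a minimal Go sketch of pausing that same set of containers one invocation per ID. The container IDs and runc root are copied from the log purely for illustration, and the exec-based runner is an assumption of this sketch, not minikube's actual pause implementation.

package main

import (
	"fmt"
	"os/exec"
)

// pauseContainers issues one `runc pause <container-id>` per container,
// since runc's pause subcommand requires exactly one argument.
func pauseContainers(root string, ids []string) error {
	for _, id := range ids {
		cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
		}
	}
	return nil
}

func main() {
	// IDs taken from the failing invocation in the log above (illustrative only).
	ids := []string{
		"293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095",
		"7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a",
	}
	if err := pauseContainers("/run/containerd/runc/k8s.io", ids); err != nil {
		fmt.Println(err)
	}
}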
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210817025908-1554185
helpers_test.go:236: (dbg) docker inspect embed-certs-20210817025908-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982",
	        "Created": "2021-08-17T02:59:10.017105184Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1709655,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T03:01:43.68006688Z",
	            "FinishedAt": "2021-08-17T03:01:42.402103208Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982/hostname",
	        "HostsPath": "/var/lib/docker/containers/8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982/hosts",
	        "LogPath": "/var/lib/docker/containers/8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982/8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982-json.log",
	        "Name": "/embed-certs-20210817025908-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210817025908-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210817025908-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/637b8df46cb9d9449bfdfddfb16834a2df92a9981b6c328fe54de322826b7b99-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/637b8df46cb9d9449bfdfddfb16834a2df92a9981b6c328fe54de322826b7b99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/637b8df46cb9d9449bfdfddfb16834a2df92a9981b6c328fe54de322826b7b99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/637b8df46cb9d9449bfdfddfb16834a2df92a9981b6c328fe54de322826b7b99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210817025908-1554185",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210817025908-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210817025908-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210817025908-1554185",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210817025908-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484ca8a5cfea5268a6896565c7b3a9ff84020fcd7153dc5b7c56e4bc38e80c1e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50482"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50479"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50481"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50480"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/484ca8a5cfea",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210817025908-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8eaed548049d",
	                        "embed-certs-20210817025908-1554185"
	                    ],
	                    "NetworkID": "05d569d1a6658b6ba8512401795e744ed2f9e1daa9e68f59cf931f36c4b889a3",
	                    "EndpointID": "99671e92ebfaade2f4af3980878ad3f2bd130905340becc7a1b5b24ea2e7cc75",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
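For context, the container state dump above is what a plain "docker container inspect" call returns for the profile's container. A minimal Go sketch of that capture step (illustrative only; the container name is taken from the log above, not from helpers_test.go):

	// Illustrative sketch: collect the inspect JSON for a post-mortem.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspectContainer shells out to docker; the output is a JSON array
	// like the dump shown above.
	func inspectContainer(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name).CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := inspectContainer("embed-certs-20210817025908-1554185")
		if err != nil {
			fmt.Println("inspect failed:", err)
		}
		fmt.Println(out)
	}
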
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185: exit status 2 (338.734245ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
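The helper tolerates the non-zero exit here because minikube encodes cluster state in the exit code of "status". A minimal sketch of reading that code, under the assumption that the binary path and profile name are the ones shown above (not the actual helper implementation):

	// Illustrative sketch: run "minikube status" and report, but do not
	// fail on, a non-zero exit code such as the "exit status 2" above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "embed-certs-20210817025908-1554185")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\n", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero codes describe host/kubelet/apiserver state,
			// so they are logged rather than treated as fatal here.
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}
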
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-20210817025908-1554185 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:253: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                      Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | force-systemd-flag-20210817024631-1554185         | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:25 UTC | Tue, 17 Aug 2021 02:47:25 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                                   |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:25 UTC | Tue, 17 Aug 2021 02:47:28 UTC |
	|         | force-systemd-flag-20210817024631-1554185         |                                                   |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:20 UTC | Tue, 17 Aug 2021 02:48:02 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2200                                     |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:02 UTC | Tue, 17 Aug 2021 02:48:05 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:28 UTC | Tue, 17 Aug 2021 02:48:49 UTC |
	|         | cert-options-20210817024728-1554185               |                                                   |         |         |                               |                               |
	|         | --memory=2048                                     |                                                   |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                                   |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                                   |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                                   |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| -p      | cert-options-20210817024728-1554185               | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:49 UTC | Tue, 17 Aug 2021 02:48:49 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                                   |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                                   |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:49 UTC | Tue, 17 Aug 2021 02:48:52 UTC |
	|         | cert-options-20210817024728-1554185               |                                                   |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:05 UTC | Tue, 17 Aug 2021 02:50:20 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                   |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                   |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                   |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:29 UTC | Tue, 17 Aug 2021 02:50:29 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:30 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:50 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:52 UTC | Tue, 17 Aug 2021 02:50:54 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:02 UTC | Tue, 17 Aug 2021 02:51:03 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:03 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:57:04 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:57:15 UTC | Tue, 17 Aug 2021 02:57:15 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:05 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 03:01:12 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:21 UTC | Tue, 17 Aug 2021 03:01:22 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:22 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:07:27 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:38 UTC | Tue, 17 Aug 2021 03:07:38 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 03:01:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 03:01:42.915636 1709430 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:01:42.915815 1709430 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:01:42.915825 1709430 out.go:311] Setting ErrFile to fd 2...
	I0817 03:01:42.915829 1709430 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:01:42.915955 1709430 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:01:42.916188 1709430 out.go:305] Setting JSON to false
	I0817 03:01:42.917110 1709430 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38641,"bootTime":1629130662,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 03:01:42.917187 1709430 start.go:121] virtualization:  
	I0817 03:01:42.919362 1709430 out.go:177] * [embed-certs-20210817025908-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 03:01:42.920883 1709430 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 03:01:42.919510 1709430 notify.go:169] Checking for updates...
	I0817 03:01:42.922656 1709430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:01:42.924352 1709430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 03:01:42.926083 1709430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 03:01:42.926489 1709430 config.go:177] Loaded profile config "embed-certs-20210817025908-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:01:42.926938 1709430 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 03:01:42.966220 1709430 docker.go:132] docker version: linux-20.10.8
	I0817 03:01:42.966292 1709430 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:01:43.109734 1709430 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:01:43.035488435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:01:43.109885 1709430 docker.go:244] overlay module found
	I0817 03:01:43.112560 1709430 out.go:177] * Using the docker driver based on existing profile
	I0817 03:01:43.112580 1709430 start.go:278] selected driver: docker
	I0817 03:01:43.112586 1709430 start.go:751] validating driver "docker" against &{Name:embed-certs-20210817025908-1554185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout
:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:01:43.112704 1709430 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 03:01:43.112741 1709430 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:01:43.112750 1709430 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:01:43.113917 1709430 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:01:43.114457 1709430 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:01:43.240185 1709430 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:01:43.161496688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 03:01:43.240305 1709430 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:01:43.240324 1709430 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:01:43.242084 1709430 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:01:43.242179 1709430 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 03:01:43.242202 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:01:43.242210 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:01:43.242235 1709430 start_flags.go:277] config:
	{Name:embed-certs-20210817025908-1554185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeReque
sted:false ExtraDisks:0}
	I0817 03:01:43.244150 1709430 out.go:177] * Starting control plane node embed-certs-20210817025908-1554185 in cluster embed-certs-20210817025908-1554185
	I0817 03:01:43.244175 1709430 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 03:01:43.245724 1709430 out.go:177] * Pulling base image ...
	I0817 03:01:43.245741 1709430 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:01:43.245775 1709430 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 03:01:43.245783 1709430 cache.go:56] Caching tarball of preloaded images
	I0817 03:01:43.245933 1709430 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 03:01:43.245947 1709430 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 03:01:43.246055 1709430 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/config.json ...
	I0817 03:01:43.246214 1709430 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 03:01:43.302552 1709430 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 03:01:43.302578 1709430 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 03:01:43.302588 1709430 cache.go:205] Successfully downloaded all kic artifacts
	I0817 03:01:43.302624 1709430 start.go:313] acquiring machines lock for embed-certs-20210817025908-1554185: {Name:mkc8f6524c9d90ccbc42094864dd90d7c2463223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:01:43.302708 1709430 start.go:317] acquired machines lock for "embed-certs-20210817025908-1554185" in 58.248µs
	I0817 03:01:43.302730 1709430 start.go:93] Skipping create...Using existing machine configuration
	I0817 03:01:43.302735 1709430 fix.go:55] fixHost starting: 
	I0817 03:01:43.303098 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:01:43.333163 1709430 fix.go:108] recreateIfNeeded on embed-certs-20210817025908-1554185: state=Stopped err=<nil>
	W0817 03:01:43.333191 1709430 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 03:01:43.335145 1709430 out.go:177] * Restarting existing docker container for "embed-certs-20210817025908-1554185" ...
	I0817 03:01:43.335200 1709430 cli_runner.go:115] Run: docker start embed-certs-20210817025908-1554185
	I0817 03:01:43.688530 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:01:43.732399 1709430 kic.go:420] container "embed-certs-20210817025908-1554185" state is running.
	I0817 03:01:43.732746 1709430 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210817025908-1554185
	I0817 03:01:43.780842 1709430 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/config.json ...
	I0817 03:01:43.781022 1709430 machine.go:88] provisioning docker machine ...
	I0817 03:01:43.781036 1709430 ubuntu.go:169] provisioning hostname "embed-certs-20210817025908-1554185"
	I0817 03:01:43.781081 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:43.818557 1709430 main.go:130] libmachine: Using SSH client type: native
	I0817 03:01:43.819051 1709430 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50483 <nil> <nil>}
	I0817 03:01:43.819126 1709430 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210817025908-1554185 && echo "embed-certs-20210817025908-1554185" | sudo tee /etc/hostname
	I0817 03:01:43.819693 1709430 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43842->127.0.0.1:50483: read: connection reset by peer
	I0817 03:01:46.941429 1709430 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210817025908-1554185
	
	I0817 03:01:46.941509 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:46.973475 1709430 main.go:130] libmachine: Using SSH client type: native
	I0817 03:01:46.973643 1709430 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50483 <nil> <nil>}
	I0817 03:01:46.973672 1709430 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210817025908-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210817025908-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210817025908-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 03:01:47.098196 1709430 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 03:01:47.098263 1709430 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 03:01:47.098302 1709430 ubuntu.go:177] setting up certificates
	I0817 03:01:47.098337 1709430 provision.go:83] configureAuth start
	I0817 03:01:47.098419 1709430 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210817025908-1554185
	I0817 03:01:47.140418 1709430 provision.go:138] copyHostCerts
	I0817 03:01:47.140475 1709430 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 03:01:47.140490 1709430 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 03:01:47.140552 1709430 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 03:01:47.140638 1709430 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 03:01:47.140647 1709430 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 03:01:47.140669 1709430 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 03:01:47.140724 1709430 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 03:01:47.140732 1709430 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 03:01:47.140752 1709430 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 03:01:47.140796 1709430 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210817025908-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210817025908-1554185]
	I0817 03:01:47.563754 1709430 provision.go:172] copyRemoteCerts
	I0817 03:01:47.563839 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 03:01:47.563897 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:47.594589 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:47.676748 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 03:01:47.691618 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0817 03:01:47.707188 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 03:01:47.721435 1709430 provision.go:86] duration metric: configureAuth took 623.075101ms
	I0817 03:01:47.721456 1709430 ubuntu.go:193] setting minikube options for container-runtime
	I0817 03:01:47.721620 1709430 config.go:177] Loaded profile config "embed-certs-20210817025908-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:01:47.721633 1709430 machine.go:91] provisioned docker machine in 3.94060428s
	I0817 03:01:47.721640 1709430 start.go:267] post-start starting for "embed-certs-20210817025908-1554185" (driver="docker")
	I0817 03:01:47.721653 1709430 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 03:01:47.721699 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 03:01:47.721738 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:47.753024 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:47.836811 1709430 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 03:01:47.839115 1709430 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 03:01:47.839138 1709430 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 03:01:47.839151 1709430 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 03:01:47.839156 1709430 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 03:01:47.839164 1709430 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 03:01:47.839207 1709430 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 03:01:47.839292 1709430 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 03:01:47.839383 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 03:01:47.845028 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:01:47.859892 1709430 start.go:270] post-start completed in 138.235488ms
	I0817 03:01:47.862563 1709430 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 03:01:47.862604 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:47.895396 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:47.981879 1709430 fix.go:57] fixHost completed within 4.679139366s
	I0817 03:01:47.981902 1709430 start.go:80] releasing machines lock for "embed-certs-20210817025908-1554185", held for 4.679182122s
	I0817 03:01:47.981973 1709430 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210817025908-1554185
	I0817 03:01:48.018361 1709430 ssh_runner.go:149] Run: systemctl --version
	I0817 03:01:48.018413 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:48.018620 1709430 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 03:01:48.018669 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:48.084825 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:48.109792 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:48.182191 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 03:01:48.477295 1709430 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 03:01:48.486407 1709430 docker.go:153] disabling docker service ...
	I0817 03:01:48.486452 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 03:01:48.495395 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 03:01:48.503277 1709430 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 03:01:48.574458 1709430 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 03:01:48.650525 1709430 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 03:01:48.658005 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 03:01:48.668723 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
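
The containerd configuration above is shipped to the node as one base64 blob and decoded in place (`... | base64 -d | sudo tee /etc/containerd/config.toml`). The short Go program below is not part of minikube; it is only an illustrative sketch, under that assumption, of decoding the blob captured from this log so the resulting config.toml can be inspected locally.

	// decode_config.go - illustrative only (not minikube code): decode a base64
	// containerd config blob like the one above and print the TOML it produces.
	package main
	
	import (
		"encoding/base64"
		"fmt"
		"os"
	)
	
	func main() {
		if len(os.Args) != 2 {
			fmt.Fprintln(os.Stderr, "usage: decode_config <base64-blob>")
			os.Exit(2)
		}
		raw, err := base64.StdEncoding.DecodeString(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		fmt.Print(string(raw)) // the TOML that would land in /etc/containerd/config.toml
	}
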
	I0817 03:01:48.680039 1709430 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 03:01:48.685607 1709430 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 03:01:48.691075 1709430 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 03:01:48.770865 1709430 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 03:01:48.856461 1709430 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 03:01:48.856555 1709430 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 03:01:48.860033 1709430 start.go:413] Will wait 60s for crictl version
	I0817 03:01:48.860113 1709430 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:01:48.885394 1709430 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T03:01:48Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 03:01:59.932195 1709430 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:01:59.954055 1709430 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 03:01:59.954117 1709430 ssh_runner.go:149] Run: containerd --version
	I0817 03:01:59.974914 1709430 ssh_runner.go:149] Run: containerd --version
	I0817 03:01:59.996782 1709430 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 03:01:59.996854 1709430 cli_runner.go:115] Run: docker network inspect embed-certs-20210817025908-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:02:00.034307 1709430 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 03:02:00.037446 1709430 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:02:00.046058 1709430 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:02:00.046122 1709430 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:02:00.081340 1709430 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:02:00.081357 1709430 containerd.go:517] Images already preloaded, skipping extraction
	I0817 03:02:00.081401 1709430 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:02:00.108655 1709430 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:02:00.108676 1709430 cache_images.go:74] Images are preloaded, skipping loading
	I0817 03:02:00.108741 1709430 ssh_runner.go:149] Run: sudo crictl info
	I0817 03:02:00.143555 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:02:00.143577 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:02:00.143588 1709430 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 03:02:00.143605 1709430 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210817025908-1554185 NodeName:embed-certs-20210817025908-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 03:02:00.143742 1709430 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210817025908-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
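The generated kubeadm config above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml on the node. As an illustrative sketch only, and assuming a local copy of that file plus gopkg.in/yaml.v3, the following program walks the documents and prints each apiVersion/kind:

	// kubeadm_docs.go - illustrative only: list the documents in a multi-document
	// kubeadm.yaml like the one generated above (the local file name is an assumption).
	package main
	
	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	type docHeader struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}
	
	func main() {
		f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var h docHeader
			if err := dec.Decode(&h); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
		}
	}
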
	I0817 03:02:00.143826 1709430 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210817025908-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 03:02:00.143885 1709430 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 03:02:00.151550 1709430 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 03:02:00.151608 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 03:02:00.158110 1709430 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (579 bytes)
	I0817 03:02:00.172909 1709430 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 03:02:00.185604 1709430 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0817 03:02:00.198148 1709430 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 03:02:00.202587 1709430 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
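
Both /etc/hosts edits above (host.minikube.internal and control-plane.minikube.internal) use the same idempotent shell pattern: drop any existing line for the name, append the desired entry, write to a temp file, and copy it back with sudo. The Go sketch below is purely illustrative of that pattern; the file path and helper name are assumptions, not minikube code.

	// hosts_entry.go - illustrative: ensure exactly one "ip<TAB>name" line exists in a
	// hosts-style file by filtering out stale entries and appending the new one.
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale entry for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		// Writing /etc/hosts itself needs root; a scratch copy keeps the example harmless.
		if err := ensureHostsEntry("hosts.copy", "192.168.49.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
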
	I0817 03:02:00.211935 1709430 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185 for IP: 192.168.49.2
	I0817 03:02:00.211985 1709430 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 03:02:00.212005 1709430 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 03:02:00.212058 1709430 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/client.key
	I0817 03:02:00.212079 1709430 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/apiserver.key.dd3b5fb2
	I0817 03:02:00.212099 1709430 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/proxy-client.key
	I0817 03:02:00.212189 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 03:02:00.212226 1709430 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 03:02:00.212240 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 03:02:00.212263 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 03:02:00.212302 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 03:02:00.212327 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 03:02:00.212374 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:02:00.213402 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 03:02:00.233903 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 03:02:00.257339 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 03:02:00.272567 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 03:02:00.287332 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 03:02:00.303591 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 03:02:00.323416 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 03:02:00.338181 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 03:02:00.352831 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 03:02:00.367365 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 03:02:00.381902 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 03:02:00.396438 1709430 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 03:02:00.407669 1709430 ssh_runner.go:149] Run: openssl version
	I0817 03:02:00.411901 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 03:02:00.417999 1709430 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 03:02:00.420591 1709430 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 03:02:00.420649 1709430 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 03:02:00.424886 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 03:02:00.430590 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 03:02:00.436691 1709430 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:02:00.439330 1709430 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:02:00.439385 1709430 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:02:00.443503 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 03:02:00.449205 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 03:02:00.455268 1709430 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 03:02:00.457833 1709430 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 03:02:00.457897 1709430 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 03:02:00.462009 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
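
The sequence above installs each certificate under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (`openssl x509 -hash -noout -in <cert>` prints the hash; b5213941.0 is created right after hashing minikubeCA.pem), which is how OpenSSL locates trusted CAs by hash. The following is only an illustrative Go sketch of those two steps; the certificate path and demo directory are assumptions.

	// ca_link.go - illustrative: compute a certificate's OpenSSL subject hash and
	// create a /etc/ssl/certs/<hash>.0 style symlink like the commands shown above.
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func main() {
		cert := "minikubeCA.pem" // assumption: a local copy of the CA certificate
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))
	
		linkDir := "certs-demo" // stand-in for /etc/ssl/certs (writing there needs root)
		if err := os.MkdirAll(linkDir, 0755); err != nil {
			log.Fatal(err)
		}
		link := filepath.Join(linkDir, hash+".0")
		_ = os.Remove(link) // mimic ln -fs: replace any existing link
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("created", link)
	}
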
	I0817 03:02:00.467819 1709430 kubeadm.go:390] StartCluster: {Name:embed-certs-20210817025908-1554185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:02:00.467915 1709430 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 03:02:00.467970 1709430 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:02:00.490420 1709430 cri.go:76] found id: "3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0"
	I0817 03:02:00.490438 1709430 cri.go:76] found id: "147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab"
	I0817 03:02:00.490443 1709430 cri.go:76] found id: "45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b"
	I0817 03:02:00.490448 1709430 cri.go:76] found id: "26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee"
	I0817 03:02:00.490452 1709430 cri.go:76] found id: "b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727"
	I0817 03:02:00.490457 1709430 cri.go:76] found id: "c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e"
	I0817 03:02:00.490463 1709430 cri.go:76] found id: "c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775"
	I0817 03:02:00.490468 1709430 cri.go:76] found id: "43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e"
	I0817 03:02:00.490478 1709430 cri.go:76] found id: ""
	I0817 03:02:00.490512 1709430 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:02:00.502926 1709430 cri.go:103] JSON = null
	W0817 03:02:00.502962 1709430 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0817 03:02:00.503016 1709430 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 03:02:00.508722 1709430 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 03:02:00.508744 1709430 kubeadm.go:600] restartCluster start
	I0817 03:02:00.508777 1709430 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 03:02:00.514130 1709430 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:00.514999 1709430 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210817025908-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:02:00.515236 1709430 kubeconfig.go:128] "embed-certs-20210817025908-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 03:02:00.515752 1709430 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:02:00.517932 1709430 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 03:02:00.523411 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:00.523458 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:00.532371 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:00.732714 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:00.732781 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:00.741454 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:00.932728 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:00.932776 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:00.941590 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.132831 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.132935 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.143133 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.333433 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.333504 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.342847 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.533149 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.533202 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.542098 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.733346 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.733423 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.742171 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.933424 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.933503 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.942215 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.133501 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.133589 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.144077 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.333428 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.333518 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.342978 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.533200 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.533285 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.542303 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.732496 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.732541 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.741347 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.932764 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.932815 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.941561 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.132828 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.132954 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.145350 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.332600 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.332663 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.341975 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.533200 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.533260 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.542160 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.542171 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.542205 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.550805 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.550831 1709430 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0817 03:02:03.550837 1709430 kubeadm.go:1032] stopping kube-system containers ...
	I0817 03:02:03.550848 1709430 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:02:03.550890 1709430 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:02:03.573081 1709430 cri.go:76] found id: "3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0"
	I0817 03:02:03.573100 1709430 cri.go:76] found id: "147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab"
	I0817 03:02:03.573105 1709430 cri.go:76] found id: "45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b"
	I0817 03:02:03.573110 1709430 cri.go:76] found id: "26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee"
	I0817 03:02:03.573115 1709430 cri.go:76] found id: "b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727"
	I0817 03:02:03.573120 1709430 cri.go:76] found id: "c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e"
	I0817 03:02:03.573125 1709430 cri.go:76] found id: "c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775"
	I0817 03:02:03.573129 1709430 cri.go:76] found id: "43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e"
	I0817 03:02:03.573133 1709430 cri.go:76] found id: ""
	I0817 03:02:03.573138 1709430 cri.go:221] Stopping containers: [3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0 147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab 45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b 26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727 c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775 43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e]
	I0817 03:02:03.573180 1709430 ssh_runner.go:149] Run: which crictl
	I0817 03:02:03.575701 1709430 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0 147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab 45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b 26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727 c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775 43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e
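
Before reconfiguring the cluster, the restart path lists every kube-system container (`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`) and stops all of the returned ids in a single `crictl stop` invocation, as the two log lines above show. A rough Go equivalent that shells out to the same crictl commands is sketched below; it is illustrative only, not minikube's implementation.

	// stop_kube_system.go - illustrative: reproduce the two crictl calls above,
	// listing kube-system container ids and stopping them in one invocation.
	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Println("nothing to stop")
			return
		}
		args := append([]string{"crictl", "stop"}, ids...)
		if err := exec.Command("sudo", args...).Run(); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("stopped %d containers\n", len(ids))
	}
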
	I0817 03:02:03.598212 1709430 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 03:02:03.607086 1709430 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:02:03.613074 1709430 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 02:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 17 02:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2075 Aug 17 03:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 17 02:59 /etc/kubernetes/scheduler.conf
	
	I0817 03:02:03.613128 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 03:02:03.618914 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 03:02:03.624793 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 03:02:03.630292 1709430 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.630355 1709430 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 03:02:03.635919 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 03:02:03.641434 1709430 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.641502 1709430 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 03:02:03.646893 1709430 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:02:03.652576 1709430 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 03:02:03.652614 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:03.715531 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.469382 1709430 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.753788332s)
	I0817 03:02:05.469407 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.630841 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.737765 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.802641 1709430 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:02:05.802701 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:06.312308 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:06.812440 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:07.311850 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:07.811839 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:08.312759 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:08.811837 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:09.311781 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:09.811827 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:10.312505 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:10.812802 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:11.311853 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:11.811838 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:12.312766 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:12.811823 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:13.312571 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:13.812682 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:14.311986 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:14.812787 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:15.312782 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:15.350503 1709430 api_server.go:70] duration metric: took 9.547861543s to wait for apiserver process to appear ...
	I0817 03:02:15.350522 1709430 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:02:15.350531 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:20.352792 1709430 api_server.go:255] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 03:02:20.853542 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:22.286600 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:02:22.286661 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:02:22.353817 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:22.428645 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:02:22.428696 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:02:22.853168 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:22.900944 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:02:22.900974 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:02:23.353240 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:23.365866 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:02:23.365915 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:02:23.853552 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:23.862728 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:02:23.880276 1709430 api_server.go:139] control plane version: v1.21.3
	I0817 03:02:23.880298 1709430 api_server.go:129] duration metric: took 8.529771373s to wait for apiserver health ...
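
The health wait above polls https://192.168.49.2:8443/healthz roughly every half second: the first request times out, the next ones return 403 for the anonymous user, then 500 while post-start hooks (rbac/bootstrap-roles, apiservice-registration-controller, ...) finish, and finally 200 "ok". A minimal polling sketch follows; it assumes a self-signed serving certificate and therefore skips TLS verification, and it is not minikube's own health-check code.

	// healthz_poll.go - illustrative: poll an apiserver /healthz endpoint until it
	// returns 200 or the deadline passes (TLS verification skipped for the demo).
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%d: %s\n", resp.StatusCode, strings.TrimSpace(string(body)))
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for /healthz")
	}
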
	I0817 03:02:23.880307 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:02:23.880320 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:02:23.882619 1709430 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:02:23.882682 1709430 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:02:23.887761 1709430 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 03:02:23.887780 1709430 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:02:23.901886 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 03:02:24.613061 1709430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:02:24.627170 1709430 system_pods.go:59] 9 kube-system pods found
	I0817 03:02:24.627205 1709430 system_pods.go:61] "coredns-558bd4d5db-dgbzs" [69a5e40e-9bca-4e76-976f-7e87232e2501] Running
	I0817 03:02:24.627214 1709430 system_pods.go:61] "etcd-embed-certs-20210817025908-1554185" [7e3ff9cb-4663-44f8-bdeb-a6851dd56f03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 03:02:24.627235 1709430 system_pods.go:61] "kindnet-6s6ww" [582e5a12-d987-4cc2-b439-264038f7fdec] Running
	I0817 03:02:24.627250 1709430 system_pods.go:61] "kube-apiserver-embed-certs-20210817025908-1554185" [d2b29440-c8bb-4946-99af-a8f6af9d310e] Running
	I0817 03:02:24.627255 1709430 system_pods.go:61] "kube-controller-manager-embed-certs-20210817025908-1554185" [055f695a-0d98-43bb-bf98-4ef9b42a8f36] Running
	I0817 03:02:24.627259 1709430 system_pods.go:61] "kube-proxy-nxbdw" [f0cef6b9-79b0-4944-917c-a3a5d3ac0488] Running
	I0817 03:02:24.627272 1709430 system_pods.go:61] "kube-scheduler-embed-certs-20210817025908-1554185" [fc3d4c1d-1efb-47ec-bf4d-3b8f51f07643] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 03:02:24.627281 1709430 system_pods.go:61] "metrics-server-7c784ccb57-7snbh" [1e2242b2-d474-4e68-b3be-5c357740f82f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:02:24.627290 1709430 system_pods.go:61] "storage-provisioner" [486f6174-9eff-4afd-8b28-7f7f218f6341] Running
	I0817 03:02:24.627296 1709430 system_pods.go:74] duration metric: took 14.217459ms to wait for pod list to return data ...
	I0817 03:02:24.627312 1709430 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:02:24.630756 1709430 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:02:24.630785 1709430 node_conditions.go:123] node cpu capacity is 2
	I0817 03:02:24.630797 1709430 node_conditions.go:105] duration metric: took 3.48013ms to run NodePressure ...
	I0817 03:02:24.630836 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:24.880793 1709430 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 03:02:24.884801 1709430 kubeadm.go:746] kubelet initialised
	I0817 03:02:24.884820 1709430 kubeadm.go:747] duration metric: took 4.010036ms waiting for restarted kubelet to initialise ...
	I0817 03:02:24.884827 1709430 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:02:24.889652 1709430 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-dgbzs" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:24.902669 1709430 pod_ready.go:92] pod "coredns-558bd4d5db-dgbzs" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:24.902694 1709430 pod_ready.go:81] duration metric: took 13.016142ms waiting for pod "coredns-558bd4d5db-dgbzs" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:24.902704 1709430 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:26.912028 1709430 pod_ready.go:102] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:29.412961 1709430 pod_ready.go:102] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:31.412833 1709430 pod_ready.go:92] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:31.412855 1709430 pod_ready.go:81] duration metric: took 6.510143114s waiting for pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:31.412884 1709430 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.895057 1683677 kubeadm.go:392] StartCluster complete in 11m24.90769791s
	I0817 03:02:32.895103 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 03:02:32.895159 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 03:02:32.918790 1683677 cri.go:76] found id: ""
	I0817 03:02:32.918806 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.918827 1683677 logs.go:272] No container was found matching "kube-apiserver"
	I0817 03:02:32.918833 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 03:02:32.918883 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 03:02:32.949174 1683677 cri.go:76] found id: ""
	I0817 03:02:32.949187 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.949193 1683677 logs.go:272] No container was found matching "etcd"
	I0817 03:02:32.949198 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 03:02:32.949239 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 03:02:32.969914 1683677 cri.go:76] found id: ""
	I0817 03:02:32.969929 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.969935 1683677 logs.go:272] No container was found matching "coredns"
	I0817 03:02:32.969939 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 03:02:32.969977 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 03:02:32.990332 1683677 cri.go:76] found id: ""
	I0817 03:02:32.990347 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.990353 1683677 logs.go:272] No container was found matching "kube-scheduler"
	I0817 03:02:32.990358 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 03:02:32.990402 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 03:02:33.012039 1683677 cri.go:76] found id: ""
	I0817 03:02:33.012053 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.012059 1683677 logs.go:272] No container was found matching "kube-proxy"
	I0817 03:02:33.012064 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 03:02:33.012102 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 03:02:33.032711 1683677 cri.go:76] found id: ""
	I0817 03:02:33.032724 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.032729 1683677 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 03:02:33.032734 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 03:02:33.032772 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 03:02:33.052565 1683677 cri.go:76] found id: ""
	I0817 03:02:33.052577 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.052582 1683677 logs.go:272] No container was found matching "storage-provisioner"
	I0817 03:02:33.052588 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 03:02:33.052623 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 03:02:33.077480 1683677 cri.go:76] found id: ""
	I0817 03:02:33.077492 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.077498 1683677 logs.go:272] No container was found matching "kube-controller-manager"
	I0817 03:02:33.077506 1683677 logs.go:123] Gathering logs for container status ...
	I0817 03:02:33.077517 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 03:02:33.100699 1683677 logs.go:123] Gathering logs for kubelet ...
	I0817 03:02:33.100718 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 03:02:33.129118 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:27 old-k8s-version-20210817024805-1554185 kubelet[14430]: F0817 03:02:27.532257   14430 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.138717 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:28 old-k8s-version-20210817024805-1554185 kubelet[14458]: F0817 03:02:28.447274   14458 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.148323 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:29 old-k8s-version-20210817024805-1554185 kubelet[14486]: F0817 03:02:29.466067   14486 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.157836 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:30 old-k8s-version-20210817024805-1554185 kubelet[14514]: F0817 03:02:30.451892   14514 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.167337 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:31 old-k8s-version-20210817024805-1554185 kubelet[14542]: F0817 03:02:31.453780   14542 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.176828 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:32 old-k8s-version-20210817024805-1554185 kubelet[14570]: F0817 03:02:32.493682   14570 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.176996 1683677 logs.go:123] Gathering logs for dmesg ...
	I0817 03:02:33.177009 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 03:02:33.193572 1683677 logs.go:123] Gathering logs for describe nodes ...
	I0817 03:02:33.193593 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0817 03:02:33.276403 1683677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0817 03:02:33.276429 1683677 logs.go:123] Gathering logs for containerd ...
	I0817 03:02:33.276441 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W0817 03:02:33.361580 1683677 out.go:371] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0817 03:02:33.361625 1683677 out.go:242] * 
	W0817 03:02:33.361820 1683677 out.go:242] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0817 03:02:33.361869 1683677 out.go:242] * 
	W0817 03:02:33.367626 1683677 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	│                                                                                                                                                                │
	│    * Please attach the following file to the GitHub issue:                                                                                                     │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 03:02:33.369951 1683677 out.go:177] X Problems detected in kubelet:
	I0817 03:02:33.371762 1683677 out.go:177]   Aug 17 03:02:27 old-k8s-version-20210817024805-1554185 kubelet[14430]: F0817 03:02:27.532257   14430 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.373375 1683677 out.go:177]   Aug 17 03:02:28 old-k8s-version-20210817024805-1554185 kubelet[14458]: F0817 03:02:28.447274   14458 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.375775 1683677 out.go:177]   Aug 17 03:02:29 old-k8s-version-20210817024805-1554185 kubelet[14486]: F0817 03:02:29.466067   14486 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.379649 1683677 out.go:177] 
	W0817 03:02:33.379877 1683677 out.go:242] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0817 03:02:33.379979 1683677 out.go:242] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0817 03:02:33.380044 1683677 out.go:242] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0817 03:02:32.926388 1709430 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:32.926430 1709430 pod_ready.go:81] duration metric: took 1.513533882s waiting for pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.926455 1709430 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.933680 1709430 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:32.933699 1709430 pod_ready.go:81] duration metric: took 7.225348ms waiting for pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.933710 1709430 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nxbdw" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.937869 1709430 pod_ready.go:92] pod "kube-proxy-nxbdw" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:32.937883 1709430 pod_ready.go:81] duration metric: took 4.167442ms waiting for pod "kube-proxy-nxbdw" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.937891 1709430 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.941486 1709430 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:32.941528 1709430 pod_ready.go:81] duration metric: took 3.629338ms waiting for pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.941550 1709430 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:35.015397 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:37.015738 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:39.514875 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:41.515474 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:44.015526 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:46.515154 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:48.516349 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:51.015902 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:53.016610 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:55.516927 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:58.015378 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:00.016324 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:02.514890 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:04.515285 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:06.515767 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:09.016036 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:11.515428 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:13.515969 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:16.015790 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:18.016205 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:20.515609 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:23.061025 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:25.515853 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:27.520053 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:30.016199 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:32.515382 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:34.515445 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:36.515518 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:39.015854 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:41.515524 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:43.518482 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:46.015697 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:48.515478 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:50.515773 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:53.016600 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:55.516532 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:58.015563 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:00.015859 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:02.016302 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:04.515245 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:06.516063 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:09.015932 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:11.515424 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:14.016419 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:16.515655 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:18.520563 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:21.015597 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:23.015859 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:25.514913 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:28.015392 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:30.015742 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:32.515332 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:35.016435 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:37.520050 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:40.016544 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:42.515734 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:45.016161 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:47.515706 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:50.015655 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:52.514906 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:54.515311 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:57.015594 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:59.016480 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:01.516175 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:04.015967 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:06.016228 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:08.515329 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:10.515452 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:13.015531 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:15.016010 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:17.515332 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:19.515502 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:21.515626 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:23.515748 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:26.015837 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:28.516107 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:31.015443 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:33.016321 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:35.516016 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:38.014953 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:40.015429 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:42.514805 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:44.515096 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:47.015351 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:49.016205 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:51.515273 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:53.516058 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:56.015790 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:58.016407 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:00.515481 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:02.520899 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:05.068189 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:07.516194 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:10.015808 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:12.016283 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:14.515104 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:17.015776 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:19.515286 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:21.516219 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:24.015645 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:26.015874 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:28.016108 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:30.016312 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:32.515227 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:33.011802 1709430 pod_ready.go:81] duration metric: took 4m0.070227103s waiting for pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace to be "Ready" ...
	E0817 03:06:33.011829 1709430 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0817 03:06:33.011847 1709430 pod_ready.go:38] duration metric: took 4m8.12699401s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:06:33.011876 1709430 kubeadm.go:604] restartCluster took 4m32.503126921s
	W0817 03:06:33.011996 1709430 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 03:06:33.012027 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0817 03:06:35.083075 1709430 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.07102551s)
	I0817 03:06:35.083140 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0817 03:06:35.092846 1709430 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:06:35.092902 1709430 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:06:35.115936 1709430 cri.go:76] found id: ""
	I0817 03:06:35.115984 1709430 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:06:35.122087 1709430 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 03:06:35.122134 1709430 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:06:35.129050 1709430 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 03:06:35.129082 1709430 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 03:06:35.407886 1709430 out.go:204]   - Generating certificates and keys ...
	I0817 03:06:37.162389 1709430 out.go:204]   - Booting up control plane ...
	I0817 03:06:57.233627 1709430 out.go:204]   - Configuring RBAC rules ...
	I0817 03:06:57.650410 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:06:57.650434 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:06:57.652653 1709430 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:06:57.652719 1709430 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:06:57.655675 1709430 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 03:06:57.655688 1709430 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:06:57.667447 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 03:06:58.040395 1709430 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 03:06:58.040521 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:06:58.040583 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=embed-certs-20210817025908-1554185 minikube.k8s.io/updated_at=2021_08_17T03_06_58_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:06:58.186502 1709430 ops.go:34] apiserver oom_adj: -16
	I0817 03:06:58.186627 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:06:58.766890 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:06:59.266830 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:06:59.766336 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:00.266383 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:00.767000 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:01.267124 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:01.767094 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:02.266349 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:02.766331 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:03.266338 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:03.766308 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:04.267140 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:04.766972 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:05.266794 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:05.766726 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:06.266684 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:06.766917 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:07.267140 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:07.766301 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:08.266344 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:08.767252 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:09.266788 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:09.766306 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:10.266605 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:10.767141 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:11.266958 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:11.766326 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:12.055816 1709430 kubeadm.go:985] duration metric: took 14.015338574s to wait for elevateKubeSystemPrivileges.
	I0817 03:07:12.055843 1709430 kubeadm.go:392] StartCluster complete in 5m11.588029043s
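The run of identical "get sa default" lines above is the elevateKubeSystemPrivileges wait: minikube polls for the default ServiceAccount roughly every 500ms (per the timestamps) until kubeadm has finished creating it. A minimal sketch of the same poll, run on the node with the exact command from the log:

	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the log shows ~500ms between attempts
	done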
	I0817 03:07:12.055858 1709430 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:07:12.055936 1709430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:07:12.057221 1709430 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:07:12.596076 1709430 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210817025908-1554185" rescaled to 1
	I0817 03:07:12.596148 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 03:07:12.596231 1709430 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 03:07:12.599437 1709430 out.go:177] * Verifying Kubernetes components...
	I0817 03:07:12.596510 1709430 config.go:177] Loaded profile config "embed-certs-20210817025908-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:07:12.596525 1709430 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0817 03:07:12.599568 1709430 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210817025908-1554185"
	I0817 03:07:12.599582 1709430 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210817025908-1554185"
	W0817 03:07:12.599588 1709430 addons.go:147] addon storage-provisioner should already be in state true
	I0817 03:07:12.599609 1709430 host.go:66] Checking if "embed-certs-20210817025908-1554185" exists ...
	I0817 03:07:12.600102 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:12.600276 1709430 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:07:12.600363 1709430 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210817025908-1554185"
	I0817 03:07:12.600396 1709430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210817025908-1554185"
	I0817 03:07:12.600657 1709430 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210817025908-1554185"
	I0817 03:07:12.600690 1709430 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210817025908-1554185"
	W0817 03:07:12.600708 1709430 addons.go:147] addon metrics-server should already be in state true
	I0817 03:07:12.600749 1709430 host.go:66] Checking if "embed-certs-20210817025908-1554185" exists ...
	I0817 03:07:12.601378 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:12.602125 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:12.601598 1709430 addons.go:59] Setting dashboard=true in profile "embed-certs-20210817025908-1554185"
	I0817 03:07:12.602383 1709430 addons.go:135] Setting addon dashboard=true in "embed-certs-20210817025908-1554185"
	W0817 03:07:12.602391 1709430 addons.go:147] addon dashboard should already be in state true
	I0817 03:07:12.602410 1709430 host.go:66] Checking if "embed-certs-20210817025908-1554185" exists ...
	I0817 03:07:12.602855 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:12.771209 1709430 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210817025908-1554185"
	W0817 03:07:12.771230 1709430 addons.go:147] addon default-storageclass should already be in state true
	I0817 03:07:12.771254 1709430 host.go:66] Checking if "embed-certs-20210817025908-1554185" exists ...
	I0817 03:07:12.771701 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:12.776770 1709430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 03:07:12.776886 1709430 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:07:12.776895 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 03:07:12.776952 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:07:12.780675 1709430 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210817025908-1554185" to be "Ready" ...
	I0817 03:07:12.781000 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 03:07:12.821302 1709430 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0817 03:07:12.823121 1709430 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0817 03:07:12.823168 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 03:07:12.823176 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 03:07:12.823229 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:07:12.820718 1709430 node_ready.go:49] node "embed-certs-20210817025908-1554185" has status "Ready":"True"
	I0817 03:07:12.823410 1709430 node_ready.go:38] duration metric: took 42.704745ms waiting for node "embed-certs-20210817025908-1554185" to be "Ready" ...
	I0817 03:07:12.823423 1709430 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:07:12.826446 1709430 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0817 03:07:12.826496 1709430 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 03:07:12.826507 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 03:07:12.826547 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:07:12.856105 1709430 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-cstpw" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:12.970871 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:07:12.976655 1709430 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 03:07:12.976671 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 03:07:12.976720 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:07:12.993936 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:07:13.034726 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:07:13.072093 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:07:13.467142 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 03:07:13.467165 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 03:07:13.478535 1709430 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 03:07:13.478553 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0817 03:07:13.502671 1709430 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 03:07:13.634770 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 03:07:13.634832 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 03:07:13.671905 1709430 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 03:07:13.671959 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 03:07:13.808076 1709430 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:07:13.808105 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 03:07:13.831406 1709430 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:07:13.903914 1709430 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:07:13.913456 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 03:07:13.913504 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 03:07:14.001089 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 03:07:14.001148 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0817 03:07:14.095464 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 03:07:14.095487 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 03:07:14.191659 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 03:07:14.191682 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 03:07:14.353288 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 03:07:14.353352 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 03:07:14.438064 1709430 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.657042641s)
	I0817 03:07:14.438131 1709430 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
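The bash pipeline completed above rewrites the CoreDNS ConfigMap in place, inserting a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the container gateway (192.168.49.1). A quick check that the injected stanza is present (a sketch, assuming kubectl access to this cluster from the host):

	# The stanza added by the sed expression should read:
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl --context embed-certs-20210817025908-1554185 -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'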
	I0817 03:07:14.482667 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 03:07:14.482699 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 03:07:14.574325 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:07:14.574386 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 03:07:14.647184 1709430 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.144449438s)
	I0817 03:07:14.687797 1709430 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:07:14.962488 1709430 pod_ready.go:102] pod "coredns-558bd4d5db-cstpw" in "kube-system" namespace has status "Ready":"False"
	I0817 03:07:15.090422 1709430 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.258942039s)
	I0817 03:07:15.090538 1709430 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210817025908-1554185"
	I0817 03:07:15.090503 1709430 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18653069s)
	I0817 03:07:15.799090 1709430 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.111220812s)
	I0817 03:07:15.801079 1709430 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0817 03:07:15.801144 1709430 addons.go:344] enableAddons completed in 3.204622086s
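Each addon above follows the same pattern: the manifest is written to /etc/kubernetes/addons/ on the node and then applied with the bundled kubectl against the node-local kubeconfig. A rough manual equivalent for a single manifest, using the profile name from this run:

	minikube -p embed-certs-20210817025908-1554185 ssh -- \
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml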
	I0817 03:07:17.384241 1709430 pod_ready.go:102] pod "coredns-558bd4d5db-cstpw" in "kube-system" namespace has status "Ready":"False"
	I0817 03:07:19.883706 1709430 pod_ready.go:92] pod "coredns-558bd4d5db-cstpw" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:19.883726 1709430 pod_ready.go:81] duration metric: took 7.027596445s waiting for pod "coredns-558bd4d5db-cstpw" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:19.883735 1709430 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:21.892953 1709430 pod_ready.go:102] pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace has status "Ready":"False"
	I0817 03:07:23.894292 1709430 pod_ready.go:102] pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace has status "Ready":"False"
	I0817 03:07:26.391489 1709430 pod_ready.go:97] error getting pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-p9qqd" not found
	I0817 03:07:26.391521 1709430 pod_ready.go:81] duration metric: took 6.507778506s waiting for pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace to be "Ready" ...
	E0817 03:07:26.391531 1709430 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-p9qqd" not found
	I0817 03:07:26.391538 1709430 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.395159 1709430 pod_ready.go:92] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:26.395177 1709430 pod_ready.go:81] duration metric: took 3.630141ms waiting for pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.395189 1709430 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.398999 1709430 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:26.399015 1709430 pod_ready.go:81] duration metric: took 3.818282ms waiting for pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.399023 1709430 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.404141 1709430 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:26.404158 1709430 pod_ready.go:81] duration metric: took 5.128832ms waiting for pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.404167 1709430 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cjs8q" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.412964 1709430 pod_ready.go:92] pod "kube-proxy-cjs8q" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:26.412982 1709430 pod_ready.go:81] duration metric: took 8.808893ms waiting for pod "kube-proxy-cjs8q" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.412991 1709430 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.591110 1709430 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:26.591125 1709430 pod_ready.go:81] duration metric: took 178.126156ms waiting for pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.591133 1709430 pod_ready.go:38] duration metric: took 13.767698541s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:07:26.591149 1709430 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:07:26.591192 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:07:26.607203 1709430 api_server.go:70] duration metric: took 14.010942606s to wait for apiserver process to appear ...
	I0817 03:07:26.607257 1709430 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:07:26.607278 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:07:26.615555 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:07:26.616288 1709430 api_server.go:139] control plane version: v1.21.3
	I0817 03:07:26.616301 1709430 api_server.go:129] duration metric: took 9.027861ms to wait for apiserver health ...
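The healthz wait above is a plain HTTPS GET against the apiserver; it can be reproduced from the host, since /healthz is typically readable anonymously on a default cluster (-k because the cluster CA is not in the host trust store):

	curl -k https://192.168.49.2:8443/healthz
	# ok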
	I0817 03:07:26.616308 1709430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:07:26.794248 1709430 system_pods.go:59] 9 kube-system pods found
	I0817 03:07:26.794280 1709430 system_pods.go:61] "coredns-558bd4d5db-cstpw" [27d17672-32a4-41f6-829e-7536a182784e] Running
	I0817 03:07:26.794286 1709430 system_pods.go:61] "etcd-embed-certs-20210817025908-1554185" [ac48f9b3-c5d0-4533-9703-604b37fcf80a] Running
	I0817 03:07:26.794291 1709430 system_pods.go:61] "kindnet-cnwp8" [76ff7eb8-7cd3-45f4-8651-a91c5f883da1] Running
	I0817 03:07:26.794296 1709430 system_pods.go:61] "kube-apiserver-embed-certs-20210817025908-1554185" [ca9c6e16-335a-40a8-9194-c17f7fd7b828] Running
	I0817 03:07:26.794323 1709430 system_pods.go:61] "kube-controller-manager-embed-certs-20210817025908-1554185" [aa65bef2-d3bc-46e4-9bb0-3688bbdd8d34] Running
	I0817 03:07:26.794335 1709430 system_pods.go:61] "kube-proxy-cjs8q" [9d9df0cd-9f52-42a0-80dc-0d78009fd46c] Running
	I0817 03:07:26.794340 1709430 system_pods.go:61] "kube-scheduler-embed-certs-20210817025908-1554185" [83426ee4-c7fa-4cf3-a49f-d115b900a37e] Running
	I0817 03:07:26.794347 1709430 system_pods.go:61] "metrics-server-7c784ccb57-48wrs" [1573fd26-713e-4757-9c30-cdb6f8181a96] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:07:26.794360 1709430 system_pods.go:61] "storage-provisioner" [09ad25fa-17f2-48b6-b8fc-fe277ad894a1] Running
	I0817 03:07:26.794366 1709430 system_pods.go:74] duration metric: took 178.052984ms to wait for pod list to return data ...
	I0817 03:07:26.794373 1709430 default_sa.go:34] waiting for default service account to be created ...
	I0817 03:07:26.991145 1709430 default_sa.go:45] found service account: "default"
	I0817 03:07:26.991162 1709430 default_sa.go:55] duration metric: took 196.767324ms for default service account to be created ...
	I0817 03:07:26.991169 1709430 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 03:07:27.193745 1709430 system_pods.go:86] 9 kube-system pods found
	I0817 03:07:27.193778 1709430 system_pods.go:89] "coredns-558bd4d5db-cstpw" [27d17672-32a4-41f6-829e-7536a182784e] Running
	I0817 03:07:27.193785 1709430 system_pods.go:89] "etcd-embed-certs-20210817025908-1554185" [ac48f9b3-c5d0-4533-9703-604b37fcf80a] Running
	I0817 03:07:27.193790 1709430 system_pods.go:89] "kindnet-cnwp8" [76ff7eb8-7cd3-45f4-8651-a91c5f883da1] Running
	I0817 03:07:27.193797 1709430 system_pods.go:89] "kube-apiserver-embed-certs-20210817025908-1554185" [ca9c6e16-335a-40a8-9194-c17f7fd7b828] Running
	I0817 03:07:27.193803 1709430 system_pods.go:89] "kube-controller-manager-embed-certs-20210817025908-1554185" [aa65bef2-d3bc-46e4-9bb0-3688bbdd8d34] Running
	I0817 03:07:27.193807 1709430 system_pods.go:89] "kube-proxy-cjs8q" [9d9df0cd-9f52-42a0-80dc-0d78009fd46c] Running
	I0817 03:07:27.193813 1709430 system_pods.go:89] "kube-scheduler-embed-certs-20210817025908-1554185" [83426ee4-c7fa-4cf3-a49f-d115b900a37e] Running
	I0817 03:07:27.193824 1709430 system_pods.go:89] "metrics-server-7c784ccb57-48wrs" [1573fd26-713e-4757-9c30-cdb6f8181a96] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:07:27.193832 1709430 system_pods.go:89] "storage-provisioner" [09ad25fa-17f2-48b6-b8fc-fe277ad894a1] Running
	I0817 03:07:27.193846 1709430 system_pods.go:126] duration metric: took 202.650824ms to wait for k8s-apps to be running ...
	I0817 03:07:27.193853 1709430 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 03:07:27.193907 1709430 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:07:27.202944 1709430 system_svc.go:56] duration metric: took 9.088603ms WaitForService to wait for kubelet.
	I0817 03:07:27.202960 1709430 kubeadm.go:547] duration metric: took 14.606702951s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 03:07:27.202978 1709430 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:07:27.390641 1709430 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:07:27.390662 1709430 node_conditions.go:123] node cpu capacity is 2
	I0817 03:07:27.390673 1709430 node_conditions.go:105] duration metric: took 187.69011ms to run NodePressure ...
	I0817 03:07:27.390682 1709430 start.go:231] waiting for startup goroutines ...
	I0817 03:07:27.446694 1709430 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 03:07:27.449081 1709430 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210817025908-1554185" cluster and "default" namespace by default
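After the "Done!" line the host kubeconfig's current context should now point at the new profile, so a quick sanity check of what the start flow configured is:

	kubectl config current-context
	# embed-certs-20210817025908-1554185
	kubectl --context embed-certs-20210817025908-1554185 get nodes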
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	104d615c49012       523cad1a4df73       18 seconds ago      Exited              dashboard-metrics-scraper   1                   6a4894b67801f
	08e5ff1e049d0       85e6c0cff043f       24 seconds ago      Running             kubernetes-dashboard        0                   d0ffff888b08e
	7d3caf80168a2       ba04bb24b9575       25 seconds ago      Running             storage-provisioner         0                   92635160ad3b9
	20c35ea263834       1a1f05a2cd7c2       27 seconds ago      Running             coredns                     0                   ce0590e491e2d
	e08609b351821       f37b7c809e5dc       28 seconds ago      Running             kindnet-cni                 0                   cdce4507fb252
	f780ed4300cf6       4ea38350a1beb       28 seconds ago      Running             kube-proxy                  0                   7291df2308366
	feb9dce8235d6       31a3b96cefc1e       53 seconds ago      Running             kube-scheduler              0                   8cb8e2f03ce9b
	dcde32be4bc6b       44a6d50ef170d       53 seconds ago      Running             kube-apiserver              0                   3a5be5205d51f
	ed91afd79a1d0       05b738aa1bc63       53 seconds ago      Running             etcd                        0                   72070b39b4062
	293dfb27a1bc6       cb310ff289d79       53 seconds ago      Running             kube-controller-manager     0                   79b3b926c5f79
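The table above is the CRI-level container listing for the node; with the containerd runtime used in this run, roughly the same view can be produced on the node with crictl (a sketch; assumes crictl in the node image is pointed at containerd's CRI socket):

	minikube -p embed-certs-20210817025908-1554185 ssh -- sudo crictl ps -a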
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 03:01:44 UTC, end at Tue 2021-08-17 03:07:41 UTC. --
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.757799740Z" level=info msg="StopContainer for \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\" returns successfully"
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.758184302Z" level=info msg="StopPodSandbox for \"93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1\""
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.758248031Z" level=info msg="Container to stop \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.772014632Z" level=info msg="TaskExit event &TaskExit{ContainerID:93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1,ID:93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1,Pid:5109,ExitStatus:137,ExitedAt:2021-08-17 03:07:21.77186663 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.795017322Z" level=info msg="shim disconnected" id=93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.795072239Z" level=error msg="copy shim log" error="read /proc/self/fd/77: file already closed"
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.824946570Z" level=info msg="TearDown network for sandbox \"93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1\" successfully"
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.824976978Z" level=info msg="StopPodSandbox for \"93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1\" returns successfully"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.251473038Z" level=info msg="CreateContainer within sandbox \"6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.253569106Z" level=info msg="RemoveContainer for \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.270691939Z" level=info msg="RemoveContainer for \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\" returns successfully"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.280454554Z" level=info msg="CreateContainer within sandbox \"6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.284109605Z" level=error msg="ContainerStatus for \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\": not found"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.284171266Z" level=info msg="StartContainer for \"104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.355718944Z" level=info msg="Finish piping stderr of container \"104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.355793733Z" level=info msg="Finish piping stdout of container \"104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.360330605Z" level=info msg="StartContainer for \"104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a\" returns successfully"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.360416052Z" level=info msg="TaskExit event &TaskExit{ContainerID:104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a,ID:104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a,Pid:6194,ExitStatus:1,ExitedAt:2021-08-17 03:07:22.357262272 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.434625282Z" level=info msg="shim disconnected" id=104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.434780071Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:23.257450513Z" level=info msg="RemoveContainer for \"1997fae2ba8dd57148b00caa95caf58f3dd2dd4a8f8fb73d023d7146823bcd1a\""
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:23.262189048Z" level=info msg="RemoveContainer for \"1997fae2ba8dd57148b00caa95caf58f3dd2dd4a8f8fb73d023d7146823bcd1a\" returns successfully"
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:28.164761565Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:28.169415917Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:28.172218825Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
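The pull failure above will never succeed: fake.domain does not resolve, so the metrics-server image never arrives and the pod stays ContainersNotReady, matching the Pending state seen in the pod listings earlier. The same failure surfaces as pod events (pod name taken from those listings):

	kubectl --context embed-certs-20210817025908-1554185 -n kube-system \
	  describe pod metrics-server-7c784ccb57-48wrs | grep -A 6 '^Events'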
	
	* 
	* ==> coredns [20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210817025908-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-20210817025908-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=embed-certs-20210817025908-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T03_06_58_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 03:06:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20210817025908-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 03:07:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 03:07:36 +0000   Tue, 17 Aug 2021 03:06:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 03:07:36 +0000   Tue, 17 Aug 2021 03:06:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 03:07:36 +0000   Tue, 17 Aug 2021 03:06:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 03:07:36 +0000   Tue, 17 Aug 2021 03:07:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    embed-certs-20210817025908-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                6441d343-a3e8-41b6-b426-f9fef981de1d
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-cstpw                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-embed-certs-20210817025908-1554185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-cnwp8                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-embed-certs-20210817025908-1554185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-embed-certs-20210817025908-1554185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-cjs8q                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-embed-certs-20210817025908-1554185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 metrics-server-7c784ccb57-48wrs                               100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         27s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-gdlf6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-wrgrg                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             520Mi (6%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  55s (x5 over 55s)  kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x4 over 55s)  kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x4 over 55s)  kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 36s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s                kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s                kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s                kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                30s                kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeReady
	  Normal  Starting                 28s                kube-proxy  Starting kube-proxy.
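The section above is captured "kubectl describe node" output; the same view can be regenerated against this profile with:

	kubectl --context embed-certs-20210817025908-1554185 describe node embed-certs-20210817025908-1554185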
	
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445] <==
	* 2021-08-17 03:06:47.465645 W | auth: simple token is not cryptographically signed
	2021-08-17 03:06:47.483862 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-17 03:06:47.484214 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/17 03:06:47 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:06:47.484589 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 03:06:47.486877 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 03:06:47.486979 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-17 03:06:47.487149 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/17 03:06:47 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 03:06:47 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 03:06:47 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 03:06:47 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 03:06:47 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 03:06:47.872537 I | etcdserver: published {Name:embed-certs-20210817025908-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 03:06:47.872661 I | embed: ready to serve client requests
	2021-08-17 03:06:47.874159 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 03:06:47.874369 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 03:06:47.874585 I | embed: ready to serve client requests
	2021-08-17 03:06:47.874877 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 03:06:47.874936 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 03:06:47.876101 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 03:07:09.768120 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 03:07:13.536783 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 03:07:23.535879 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 03:07:33.536475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
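The periodic "/health OK" lines come from etcd's local HTTP endpoint; per the "listening for metrics on http://127.0.0.1:2381" line above, the same probe can be made from inside the node (a sketch; assumes curl is available in the node image):

	minikube -p embed-certs-20210817025908-1554185 ssh -- curl -s http://127.0.0.1:2381/health
	# {"health":"true"}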
	
	* 
	* ==> kernel <==
	*  03:07:41 up 10:49,  0 users,  load average: 2.10, 1.46, 1.52
	Linux embed-certs-20210817025908-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf] <==
	* I0817 03:06:54.810683       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0817 03:06:54.890229       1 controller.go:611] quota admission added evaluator for: namespaces
	I0817 03:06:55.475186       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 03:06:55.475364       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 03:06:55.480083       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 03:06:55.482827       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 03:06:55.482848       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 03:06:55.908450       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 03:06:55.948487       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 03:06:56.070802       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 03:06:56.071935       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 03:06:56.075413       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 03:06:56.570887       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 03:06:57.211421       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 03:06:57.506639       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 03:06:57.541978       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 03:07:11.644117       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 03:07:12.005729       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0817 03:07:17.216518       1 handler_proxy.go:102] no RequestInfo found in the context
	E0817 03:07:17.216578       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 03:07:17.216585       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 03:07:26.025049       1 client.go:360] parsed scheme: "passthrough"
	I0817 03:07:26.025089       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 03:07:26.025221       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
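The 503 for v1beta1.metrics.k8s.io above is consistent with the metrics-server pod never becoming Ready; the aggregated API's status can be inspected directly (the APIService object comes from the metrics-apiservice.yaml applied earlier):

	kubectl --context embed-certs-20210817025908-1554185 get apiservice v1beta1.metrics.k8s.io
	# AVAILABLE should show False while the backing pod is still Pending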
	
	* 
	* ==> kube-controller-manager [293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095] <==
	* I0817 03:07:14.945064       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0817 03:07:14.984614       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0817 03:07:14.985230       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0817 03:07:14.992388       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-48wrs"
	I0817 03:07:15.355139       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0817 03:07:15.375358       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0817 03:07:15.394406       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:07:15.394437       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:07:15.405651       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:07:15.405746       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:07:15.439796       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:07:15.440142       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:07:15.440482       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:07:15.440499       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:07:15.453265       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:07:15.453958       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:07:15.454110       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:07:15.454205       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:07:15.468546       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:07:15.468717       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:07:15.472686       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:07:15.472870       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:07:15.503469       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-gdlf6"
	I0817 03:07:15.503815       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-wrgrg"
	I0817 03:07:16.064919       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9] <==
	* I0817 03:07:13.696074       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 03:07:13.696114       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 03:07:13.696133       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 03:07:13.721438       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 03:07:13.721467       1 server_others.go:212] Using iptables Proxier.
	I0817 03:07:13.721480       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 03:07:13.721490       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 03:07:13.721750       1 server.go:643] Version: v1.21.3
	I0817 03:07:13.722292       1 config.go:315] Starting service config controller
	I0817 03:07:13.722307       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 03:07:13.722976       1 config.go:224] Starting endpoint slice config controller
	I0817 03:07:13.723000       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 03:07:13.728562       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 03:07:13.801992       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 03:07:13.822942       1 shared_informer.go:247] Caches are synced for service config 
	I0817 03:07:13.826983       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb] <==
	* W0817 03:06:54.683767       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 03:06:54.683945       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 03:06:54.684042       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 03:06:54.684108       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 03:06:54.736843       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 03:06:54.759079       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 03:06:54.771769       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 03:06:54.771907       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 03:06:54.823892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 03:06:54.824116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 03:06:54.824293       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 03:06:54.824475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 03:06:54.824654       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 03:06:54.829806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 03:06:54.830040       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 03:06:54.830230       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 03:06:54.830432       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 03:06:54.830627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 03:06:54.830835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 03:06:54.831013       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 03:06:54.831221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 03:06:54.836466       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 03:06:55.666467       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 03:06:55.667141       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0817 03:06:56.472233       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 03:01:44 UTC, end at Tue 2021-08-17 03:07:41 UTC. --
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.488635    4620 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43d065c7-802f-4efb-a8de-757cd1544054-config-volume\") pod \"43d065c7-802f-4efb-a8de-757cd1544054\" (UID: \"43d065c7-802f-4efb-a8de-757cd1544054\") "
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.488702    4620 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gns2m\" (UniqueName: \"kubernetes.io/projected/43d065c7-802f-4efb-a8de-757cd1544054-kube-api-access-gns2m\") pod \"43d065c7-802f-4efb-a8de-757cd1544054\" (UID: \"43d065c7-802f-4efb-a8de-757cd1544054\") "
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: W0817 03:07:22.489716    4620 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/43d065c7-802f-4efb-a8de-757cd1544054/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.489856    4620 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43d065c7-802f-4efb-a8de-757cd1544054-config-volume" (OuterVolumeSpecName: "config-volume") pod "43d065c7-802f-4efb-a8de-757cd1544054" (UID: "43d065c7-802f-4efb-a8de-757cd1544054"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.493418    4620 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d065c7-802f-4efb-a8de-757cd1544054-kube-api-access-gns2m" (OuterVolumeSpecName: "kube-api-access-gns2m") pod "43d065c7-802f-4efb-a8de-757cd1544054" (UID: "43d065c7-802f-4efb-a8de-757cd1544054"). InnerVolumeSpecName "kube-api-access-gns2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.589722    4620 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43d065c7-802f-4efb-a8de-757cd1544054-config-volume\") on node \"embed-certs-20210817025908-1554185\" DevicePath \"\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.589760    4620 reconciler.go:319] "Volume detached for volume \"kube-api-access-gns2m\" (UniqueName: \"kubernetes.io/projected/43d065c7-802f-4efb-a8de-757cd1544054-kube-api-access-gns2m\") on node \"embed-certs-20210817025908-1554185\" DevicePath \"\""
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 kubelet[4620]: W0817 03:07:23.032258    4620 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod2ecf1109-a9f9-4504-a5ac-e2dd767aa611/1997fae2ba8dd57148b00caa95caf58f3dd2dd4a8f8fb73d023d7146823bcd1a WatchSource:0}: task 1997fae2ba8dd57148b00caa95caf58f3dd2dd4a8f8fb73d023d7146823bcd1a not found: not found
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:23.255765    4620 scope.go:111] "RemoveContainer" containerID="1997fae2ba8dd57148b00caa95caf58f3dd2dd4a8f8fb73d023d7146823bcd1a"
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:23.256083    4620 scope.go:111] "RemoveContainer" containerID="104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a"
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:23.256356    4620 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gdlf6_kubernetes-dashboard(2ecf1109-a9f9-4504-a5ac-e2dd767aa611)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gdlf6" podUID=2ecf1109-a9f9-4504-a5ac-e2dd767aa611
	Aug 17 03:07:24 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:24.258670    4620 scope.go:111] "RemoveContainer" containerID="104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a"
	Aug 17 03:07:24 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:24.258988    4620 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gdlf6_kubernetes-dashboard(2ecf1109-a9f9-4504-a5ac-e2dd767aa611)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gdlf6" podUID=2ecf1109-a9f9-4504-a5ac-e2dd767aa611
	Aug 17 03:07:24 embed-certs-20210817025908-1554185 kubelet[4620]: W0817 03:07:24.537147    4620 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod2ecf1109-a9f9-4504-a5ac-e2dd767aa611/104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a WatchSource:0}: task 104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a not found: not found
	Aug 17 03:07:26 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:26.425591    4620 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod1573fd26-713e-4757-9c30-cdb6f8181a96\": RecentStats: unable to find data in memory cache], [\"/kubepods/besteffort/pod09ad25fa-17f2-48b6-b8fc-fe277ad894a1\": RecentStats: unable to find data in memory cache]"
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:28.172390    4620 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:28.172447    4620 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:28.172573    4620 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sm4xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Hand
ler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[
]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-48wrs_kube-system(1573fd26-713e-4757-9c30-cdb6f8181a96): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:28.172630    4620 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-7c784ccb57-48wrs" podUID=1573fd26-713e-4757-9c30-cdb6f8181a96
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:28.718906    4620 scope.go:111] "RemoveContainer" containerID="104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a"
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:28.719376    4620 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gdlf6_kubernetes-dashboard(2ecf1109-a9f9-4504-a5ac-e2dd767aa611)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gdlf6" podUID=2ecf1109-a9f9-4504-a5ac-e2dd767aa611
	Aug 17 03:07:38 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:38.647870    4620 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 17 03:07:38 embed-certs-20210817025908-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 03:07:38 embed-certs-20210817025908-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 03:07:38 embed-certs-20210817025908-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2] <==
	* 2021/08/17 03:07:16 Using namespace: kubernetes-dashboard
	2021/08/17 03:07:16 Using in-cluster config to connect to apiserver
	2021/08/17 03:07:16 Using secret token for csrf signing
	2021/08/17 03:07:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/17 03:07:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/17 03:07:16 Successful initial request to the apiserver, version: v1.21.3
	2021/08/17 03:07:16 Generating JWE encryption key
	2021/08/17 03:07:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/17 03:07:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/17 03:07:17 Initializing JWE encryption key from synchronized object
	2021/08/17 03:07:17 Creating in-cluster Sidecar client
	2021/08/17 03:07:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/17 03:07:17 Serving insecurely on HTTP port: 9090
	2021/08/17 03:07:16 Starting overwatch
	
	* 
	* ==> storage-provisioner [7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a] <==
	* I0817 03:07:16.237266       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 03:07:16.255013       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 03:07:16.255065       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 03:07:16.261589       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 03:07:16.261848       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210817025908-1554185_efd4082b-b30e-41ef-b4db-2d7c4933d659!
	I0817 03:07:16.262901       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c7fdbca1-fdf4-47fe-a32b-419df177bb7c", APIVersion:"v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210817025908-1554185_efd4082b-b30e-41ef-b4db-2d7c4933d659 became leader
	I0817 03:07:16.362974       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210817025908-1554185_efd4082b-b30e-41ef-b4db-2d7c4933d659!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185: exit status 2 (335.400644ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20210817025908-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-48wrs
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210817025908-1554185 describe pod metrics-server-7c784ccb57-48wrs
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20210817025908-1554185 describe pod metrics-server-7c784ccb57-48wrs: exit status 1 (78.419821ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-48wrs" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20210817025908-1554185 describe pod metrics-server-7c784ccb57-48wrs: exit status 1
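The non-running-pod list gathered above comes from a single field-selector query; for readability, here is the same command from helpers_test.go:262 re-wrapped onto multiple lines and with the jsonpath quoted for the shell (same context, selector, and output path, nothing added):

	kubectl --context embed-certs-20210817025908-1554185 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'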
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210817025908-1554185
helpers_test.go:236: (dbg) docker inspect embed-certs-20210817025908-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982",
	        "Created": "2021-08-17T02:59:10.017105184Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1709655,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T03:01:43.68006688Z",
	            "FinishedAt": "2021-08-17T03:01:42.402103208Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982/hostname",
	        "HostsPath": "/var/lib/docker/containers/8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982/hosts",
	        "LogPath": "/var/lib/docker/containers/8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982/8eaed548049df08f308d0f55517067ceff943271942ff3108ad4ef71f7217982-json.log",
	        "Name": "/embed-certs-20210817025908-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210817025908-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210817025908-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/637b8df46cb9d9449bfdfddfb16834a2df92a9981b6c328fe54de322826b7b99-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/637b8df46cb9d9449bfdfddfb16834a2df92a9981b6c328fe54de322826b7b99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/637b8df46cb9d9449bfdfddfb16834a2df92a9981b6c328fe54de322826b7b99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/637b8df46cb9d9449bfdfddfb16834a2df92a9981b6c328fe54de322826b7b99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210817025908-1554185",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210817025908-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210817025908-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210817025908-1554185",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210817025908-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484ca8a5cfea5268a6896565c7b3a9ff84020fcd7153dc5b7c56e4bc38e80c1e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50482"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50479"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50481"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50480"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/484ca8a5cfea",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210817025908-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8eaed548049d",
	                        "embed-certs-20210817025908-1554185"
	                    ],
	                    "NetworkID": "05d569d1a6658b6ba8512401795e744ed2f9e1daa9e68f59cf931f36c4b889a3",
	                    "EndpointID": "99671e92ebfaade2f4af3980878ad3f2bd130905340becc7a1b5b24ea2e7cc75",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
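The inspect dump above is the full container record; when only the run state and the profile network address are of interest, Docker's template flag can narrow it. A minimal sketch using the container and network names taken from the output above (the template itself is illustrative, not part of the harness):

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "embed-certs-20210817025908-1554185").IPAddress}}' embed-certs-20210817025908-1554185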
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185: exit status 2 (342.834536ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
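The two status probes in this post-mortem each read one field ({{.APIServer}} earlier, {{.Host}} just above); a minimal sketch that would read both in a single call, assuming minikube's status template accepts multiple fields (illustrative, not part of the harness):

	out/minikube-linux-arm64 status -p embed-certs-20210817025908-1554185 --format='{{.Host}} {{.APIServer}}'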
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-20210817025908-1554185 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:253: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                      Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | force-systemd-flag-20210817024631-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:25 UTC | Tue, 17 Aug 2021 02:47:28 UTC |
	|         | force-systemd-flag-20210817024631-1554185         |                                                   |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:20 UTC | Tue, 17 Aug 2021 02:48:02 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	|         | --memory=2200                                     |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210817024307-1554185         | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:02 UTC | Tue, 17 Aug 2021 02:48:05 UTC |
	|         | kubernetes-upgrade-20210817024307-1554185         |                                                   |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:47:28 UTC | Tue, 17 Aug 2021 02:48:49 UTC |
	|         | cert-options-20210817024728-1554185               |                                                   |         |         |                               |                               |
	|         | --memory=2048                                     |                                                   |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                                   |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                                   |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                                   |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	| -p      | cert-options-20210817024728-1554185               | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:49 UTC | Tue, 17 Aug 2021 02:48:49 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                                   |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                                   |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210817024728-1554185               | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:49 UTC | Tue, 17 Aug 2021 02:48:52 UTC |
	|         | cert-options-20210817024728-1554185               |                                                   |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:05 UTC | Tue, 17 Aug 2021 02:50:20 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                   |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                   |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                   |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:29 UTC | Tue, 17 Aug 2021 02:50:29 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:30 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210817024805-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:50:50 UTC | Tue, 17 Aug 2021 02:50:50 UTC |
	|         | old-k8s-version-20210817024805-1554185            |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:52 UTC | Tue, 17 Aug 2021 02:50:54 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:02 UTC | Tue, 17 Aug 2021 02:51:03 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:03 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:57:04 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:57:15 UTC | Tue, 17 Aug 2021 02:57:15 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:05 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 03:01:12 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:21 UTC | Tue, 17 Aug 2021 03:01:22 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:22 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:07:27 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:38 UTC | Tue, 17 Aug 2021 03:07:38 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:40 UTC | Tue, 17 Aug 2021 03:07:41 UTC |
	|         | logs -n 25                                        |                                                   |         |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 03:01:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 03:01:42.915636 1709430 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:01:42.915815 1709430 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:01:42.915825 1709430 out.go:311] Setting ErrFile to fd 2...
	I0817 03:01:42.915829 1709430 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:01:42.915955 1709430 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:01:42.916188 1709430 out.go:305] Setting JSON to false
	I0817 03:01:42.917110 1709430 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38641,"bootTime":1629130662,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 03:01:42.917187 1709430 start.go:121] virtualization:  
	I0817 03:01:42.919362 1709430 out.go:177] * [embed-certs-20210817025908-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 03:01:42.920883 1709430 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 03:01:42.919510 1709430 notify.go:169] Checking for updates...
	I0817 03:01:42.922656 1709430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:01:42.924352 1709430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 03:01:42.926083 1709430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 03:01:42.926489 1709430 config.go:177] Loaded profile config "embed-certs-20210817025908-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:01:42.926938 1709430 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 03:01:42.966220 1709430 docker.go:132] docker version: linux-20.10.8
	I0817 03:01:42.966292 1709430 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:01:43.109734 1709430 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:01:43.035488435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:01:43.109885 1709430 docker.go:244] overlay module found
	I0817 03:01:43.112560 1709430 out.go:177] * Using the docker driver based on existing profile
	I0817 03:01:43.112580 1709430 start.go:278] selected driver: docker
	I0817 03:01:43.112586 1709430 start.go:751] validating driver "docker" against &{Name:embed-certs-20210817025908-1554185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:01:43.112704 1709430 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 03:01:43.112741 1709430 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:01:43.112750 1709430 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:01:43.113917 1709430 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:01:43.114457 1709430 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:01:43.240185 1709430 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:01:43.161496688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 03:01:43.240305 1709430 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:01:43.240324 1709430 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:01:43.242084 1709430 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:01:43.242179 1709430 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 03:01:43.242202 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:01:43.242210 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:01:43.242235 1709430 start_flags.go:277] config:
	{Name:embed-certs-20210817025908-1554185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:01:43.244150 1709430 out.go:177] * Starting control plane node embed-certs-20210817025908-1554185 in cluster embed-certs-20210817025908-1554185
	I0817 03:01:43.244175 1709430 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 03:01:43.245724 1709430 out.go:177] * Pulling base image ...
	I0817 03:01:43.245741 1709430 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:01:43.245775 1709430 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 03:01:43.245783 1709430 cache.go:56] Caching tarball of preloaded images
	I0817 03:01:43.245933 1709430 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 03:01:43.245947 1709430 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 03:01:43.246055 1709430 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/config.json ...
	I0817 03:01:43.246214 1709430 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 03:01:43.302552 1709430 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 03:01:43.302578 1709430 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 03:01:43.302588 1709430 cache.go:205] Successfully downloaded all kic artifacts
	I0817 03:01:43.302624 1709430 start.go:313] acquiring machines lock for embed-certs-20210817025908-1554185: {Name:mkc8f6524c9d90ccbc42094864dd90d7c2463223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:01:43.302708 1709430 start.go:317] acquired machines lock for "embed-certs-20210817025908-1554185" in 58.248µs
	I0817 03:01:43.302730 1709430 start.go:93] Skipping create...Using existing machine configuration
	I0817 03:01:43.302735 1709430 fix.go:55] fixHost starting: 
	I0817 03:01:43.303098 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:01:43.333163 1709430 fix.go:108] recreateIfNeeded on embed-certs-20210817025908-1554185: state=Stopped err=<nil>
	W0817 03:01:43.333191 1709430 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 03:01:43.335145 1709430 out.go:177] * Restarting existing docker container for "embed-certs-20210817025908-1554185" ...
	I0817 03:01:43.335200 1709430 cli_runner.go:115] Run: docker start embed-certs-20210817025908-1554185
	I0817 03:01:43.688530 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:01:43.732399 1709430 kic.go:420] container "embed-certs-20210817025908-1554185" state is running.
	I0817 03:01:43.732746 1709430 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210817025908-1554185
	I0817 03:01:43.780842 1709430 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/config.json ...
	I0817 03:01:43.781022 1709430 machine.go:88] provisioning docker machine ...
	I0817 03:01:43.781036 1709430 ubuntu.go:169] provisioning hostname "embed-certs-20210817025908-1554185"
	I0817 03:01:43.781081 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:43.818557 1709430 main.go:130] libmachine: Using SSH client type: native
	I0817 03:01:43.819051 1709430 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50483 <nil> <nil>}
	I0817 03:01:43.819126 1709430 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210817025908-1554185 && echo "embed-certs-20210817025908-1554185" | sudo tee /etc/hostname
	I0817 03:01:43.819693 1709430 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43842->127.0.0.1:50483: read: connection reset by peer
	I0817 03:01:46.941429 1709430 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210817025908-1554185
	
	I0817 03:01:46.941509 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:46.973475 1709430 main.go:130] libmachine: Using SSH client type: native
	I0817 03:01:46.973643 1709430 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50483 <nil> <nil>}
	I0817 03:01:46.973672 1709430 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210817025908-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210817025908-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210817025908-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 03:01:47.098196 1709430 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 03:01:47.098263 1709430 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 03:01:47.098302 1709430 ubuntu.go:177] setting up certificates
	I0817 03:01:47.098337 1709430 provision.go:83] configureAuth start
	I0817 03:01:47.098419 1709430 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210817025908-1554185
	I0817 03:01:47.140418 1709430 provision.go:138] copyHostCerts
	I0817 03:01:47.140475 1709430 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 03:01:47.140490 1709430 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 03:01:47.140552 1709430 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 03:01:47.140638 1709430 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 03:01:47.140647 1709430 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 03:01:47.140669 1709430 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 03:01:47.140724 1709430 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 03:01:47.140732 1709430 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 03:01:47.140752 1709430 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 03:01:47.140796 1709430 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210817025908-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210817025908-1554185]
	I0817 03:01:47.563754 1709430 provision.go:172] copyRemoteCerts
	I0817 03:01:47.563839 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 03:01:47.563897 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:47.594589 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:47.676748 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 03:01:47.691618 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0817 03:01:47.707188 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 03:01:47.721435 1709430 provision.go:86] duration metric: configureAuth took 623.075101ms
	I0817 03:01:47.721456 1709430 ubuntu.go:193] setting minikube options for container-runtime
	I0817 03:01:47.721620 1709430 config.go:177] Loaded profile config "embed-certs-20210817025908-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:01:47.721633 1709430 machine.go:91] provisioned docker machine in 3.94060428s
	I0817 03:01:47.721640 1709430 start.go:267] post-start starting for "embed-certs-20210817025908-1554185" (driver="docker")
	I0817 03:01:47.721653 1709430 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 03:01:47.721699 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 03:01:47.721738 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:47.753024 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:47.836811 1709430 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 03:01:47.839115 1709430 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 03:01:47.839138 1709430 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 03:01:47.839151 1709430 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 03:01:47.839156 1709430 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 03:01:47.839164 1709430 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 03:01:47.839207 1709430 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 03:01:47.839292 1709430 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 03:01:47.839383 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 03:01:47.845028 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:01:47.859892 1709430 start.go:270] post-start completed in 138.235488ms
	I0817 03:01:47.862563 1709430 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 03:01:47.862604 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:47.895396 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:47.981879 1709430 fix.go:57] fixHost completed within 4.679139366s
	I0817 03:01:47.981902 1709430 start.go:80] releasing machines lock for "embed-certs-20210817025908-1554185", held for 4.679182122s
	I0817 03:01:47.981973 1709430 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210817025908-1554185
	I0817 03:01:48.018361 1709430 ssh_runner.go:149] Run: systemctl --version
	I0817 03:01:48.018413 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:48.018620 1709430 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 03:01:48.018669 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:01:48.084825 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:48.109792 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:01:48.182191 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 03:01:48.477295 1709430 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 03:01:48.486407 1709430 docker.go:153] disabling docker service ...
	I0817 03:01:48.486452 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 03:01:48.495395 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 03:01:48.503277 1709430 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 03:01:48.574458 1709430 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 03:01:48.650525 1709430 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 03:01:48.658005 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 03:01:48.668723 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0817 03:01:48.680039 1709430 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 03:01:48.685607 1709430 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 03:01:48.691075 1709430 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 03:01:48.770865 1709430 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 03:01:48.856461 1709430 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 03:01:48.856555 1709430 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 03:01:48.860033 1709430 start.go:413] Will wait 60s for crictl version
	I0817 03:01:48.860113 1709430 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:01:48.885394 1709430 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T03:01:48Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 03:01:59.932195 1709430 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:01:59.954055 1709430 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 03:01:59.954117 1709430 ssh_runner.go:149] Run: containerd --version
	I0817 03:01:59.974914 1709430 ssh_runner.go:149] Run: containerd --version
	I0817 03:01:59.996782 1709430 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 03:01:59.996854 1709430 cli_runner.go:115] Run: docker network inspect embed-certs-20210817025908-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:02:00.034307 1709430 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 03:02:00.037446 1709430 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:02:00.046058 1709430 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:02:00.046122 1709430 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:02:00.081340 1709430 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:02:00.081357 1709430 containerd.go:517] Images already preloaded, skipping extraction
	I0817 03:02:00.081401 1709430 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:02:00.108655 1709430 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:02:00.108676 1709430 cache_images.go:74] Images are preloaded, skipping loading
	I0817 03:02:00.108741 1709430 ssh_runner.go:149] Run: sudo crictl info
	I0817 03:02:00.143555 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:02:00.143577 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:02:00.143588 1709430 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 03:02:00.143605 1709430 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210817025908-1554185 NodeName:embed-certs-20210817025908-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 03:02:00.143742 1709430 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210817025908-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 03:02:00.143826 1709430 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210817025908-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 03:02:00.143885 1709430 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 03:02:00.151550 1709430 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 03:02:00.151608 1709430 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 03:02:00.158110 1709430 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (579 bytes)
	I0817 03:02:00.172909 1709430 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 03:02:00.185604 1709430 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0817 03:02:00.198148 1709430 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 03:02:00.202587 1709430 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:02:00.211935 1709430 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185 for IP: 192.168.49.2
	I0817 03:02:00.211985 1709430 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 03:02:00.212005 1709430 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 03:02:00.212058 1709430 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/client.key
	I0817 03:02:00.212079 1709430 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/apiserver.key.dd3b5fb2
	I0817 03:02:00.212099 1709430 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/proxy-client.key
	I0817 03:02:00.212189 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 03:02:00.212226 1709430 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 03:02:00.212240 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 03:02:00.212263 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 03:02:00.212302 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 03:02:00.212327 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 03:02:00.212374 1709430 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:02:00.213402 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 03:02:00.233903 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 03:02:00.257339 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 03:02:00.272567 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210817025908-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 03:02:00.287332 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 03:02:00.303591 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 03:02:00.323416 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 03:02:00.338181 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 03:02:00.352831 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 03:02:00.367365 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 03:02:00.381902 1709430 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 03:02:00.396438 1709430 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 03:02:00.407669 1709430 ssh_runner.go:149] Run: openssl version
	I0817 03:02:00.411901 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 03:02:00.417999 1709430 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 03:02:00.420591 1709430 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 03:02:00.420649 1709430 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 03:02:00.424886 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 03:02:00.430590 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 03:02:00.436691 1709430 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:02:00.439330 1709430 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:02:00.439385 1709430 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:02:00.443503 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 03:02:00.449205 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 03:02:00.455268 1709430 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 03:02:00.457833 1709430 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 03:02:00.457897 1709430 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 03:02:00.462009 1709430 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
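The openssl and ln steps just above are how each CA ends up in the node's OpenSSL trust store: the certificate's subject hash is computed and the PEM is symlinked as /etc/ssl/certs/<hash>.0. A stand-alone sketch of that hash-and-symlink step (the certificate path below is a placeholder, openssl is assumed to be on PATH, and it must run as root to write under /etc/ssl/certs):

    // installca.go - a stand-alone sketch of the hash-and-symlink step logged above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/example.pem" // placeholder path

        // `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "openssl:", err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out))

        // Symlink the PEM as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients trust it.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(pem, link); err != nil {
                fmt.Fprintln(os.Stderr, "symlink:", err)
                os.Exit(1)
            }
        }
        fmt.Println("trusted:", link)
    }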
	I0817 03:02:00.467819 1709430 kubeadm.go:390] StartCluster: {Name:embed-certs-20210817025908-1554185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210817025908-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:02:00.467915 1709430 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 03:02:00.467970 1709430 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:02:00.490420 1709430 cri.go:76] found id: "3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0"
	I0817 03:02:00.490438 1709430 cri.go:76] found id: "147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab"
	I0817 03:02:00.490443 1709430 cri.go:76] found id: "45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b"
	I0817 03:02:00.490448 1709430 cri.go:76] found id: "26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee"
	I0817 03:02:00.490452 1709430 cri.go:76] found id: "b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727"
	I0817 03:02:00.490457 1709430 cri.go:76] found id: "c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e"
	I0817 03:02:00.490463 1709430 cri.go:76] found id: "c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775"
	I0817 03:02:00.490468 1709430 cri.go:76] found id: "43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e"
	I0817 03:02:00.490478 1709430 cri.go:76] found id: ""
	I0817 03:02:00.490512 1709430 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:02:00.502926 1709430 cri.go:103] JSON = null
	W0817 03:02:00.502962 1709430 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
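At this point the runtime is inspected from two sides: crictl, filtered on the kube-system namespace label, reports eight container IDs, while runc's query against containerd's k8s.io state root returns null, so the unpause step is skipped with the warning above. A rough sketch of that cross-check, reusing the same commands and the /run/containerd/runc/k8s.io root shown in the log (crictl and runc are assumed to be installed on the node):

    // pausecheck.go - sketch of comparing crictl's view with runc's container list.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Container IDs in the kube-system namespace, as reported by the CRI.
        psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl:", err)
            return
        }
        ids := strings.Fields(string(psOut))

        // Containers known to runc under containerd's k8s.io state root.
        listOut, err := exec.Command("sudo", "runc",
            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        if err != nil {
            fmt.Println("runc:", err)
            return
        }
        var runcContainers []map[string]interface{}
        _ = json.Unmarshal(listOut, &runcContainers) // a bare "null" unmarshals to an empty list

        fmt.Printf("crictl sees %d containers, runc lists %d\n", len(ids), len(runcContainers))
    }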
	I0817 03:02:00.503016 1709430 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 03:02:00.508722 1709430 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 03:02:00.508744 1709430 kubeadm.go:600] restartCluster start
	I0817 03:02:00.508777 1709430 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 03:02:00.514130 1709430 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:00.514999 1709430 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210817025908-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:02:00.515236 1709430 kubeconfig.go:128] "embed-certs-20210817025908-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 03:02:00.515752 1709430 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:02:00.517932 1709430 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 03:02:00.523411 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:00.523458 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:00.532371 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:00.732714 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:00.732781 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:00.741454 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:00.932728 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:00.932776 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:00.941590 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.132831 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.132935 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.143133 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.333433 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.333504 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.342847 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.533149 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.533202 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.542098 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.733346 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.733423 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.742171 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:01.933424 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:01.933503 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:01.942215 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.133501 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.133589 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.144077 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.333428 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.333518 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.342978 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.533200 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.533285 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.542303 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.732496 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.732541 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.741347 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:02.932764 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:02.932815 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:02.941561 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.132828 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.132954 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.145350 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.332600 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.332663 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.341975 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.533200 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.533260 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.542160 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.542171 1709430 api_server.go:164] Checking apiserver status ...
	I0817 03:02:03.542205 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:02:03.550805 1709430 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.550831 1709430 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
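The repeated "Checking apiserver status" entries above are a fixed-interval poll: roughly every 200ms minikube runs pgrep for a kube-apiserver process, and once the attempts are exhausted it concludes the control plane needs to be reconfigured. A compact poller in the same spirit (the 3-second timeout here is illustrative, not taken from the log):

    // apiserverwait.go - sketch of polling for a running kube-apiserver process.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(3 * time.Second) // illustrative timeout
        for time.Now().Before(deadline) {
            // pgrep exits 0 only if a matching process exists.
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver is running")
                return
            }
            time.Sleep(200 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver; the cluster needs a reconfigure")
    }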
	I0817 03:02:03.550837 1709430 kubeadm.go:1032] stopping kube-system containers ...
	I0817 03:02:03.550848 1709430 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:02:03.550890 1709430 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:02:03.573081 1709430 cri.go:76] found id: "3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0"
	I0817 03:02:03.573100 1709430 cri.go:76] found id: "147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab"
	I0817 03:02:03.573105 1709430 cri.go:76] found id: "45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b"
	I0817 03:02:03.573110 1709430 cri.go:76] found id: "26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee"
	I0817 03:02:03.573115 1709430 cri.go:76] found id: "b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727"
	I0817 03:02:03.573120 1709430 cri.go:76] found id: "c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e"
	I0817 03:02:03.573125 1709430 cri.go:76] found id: "c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775"
	I0817 03:02:03.573129 1709430 cri.go:76] found id: "43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e"
	I0817 03:02:03.573133 1709430 cri.go:76] found id: ""
	I0817 03:02:03.573138 1709430 cri.go:221] Stopping containers: [3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0 147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab 45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b 26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727 c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775 43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e]
	I0817 03:02:03.573180 1709430 ssh_runner.go:149] Run: which crictl
	I0817 03:02:03.575701 1709430 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 3819e01cd0f969187e57a4fc98222a3ea0a77d2454e68ae4ff98e41dcc7b0dc0 147743153867e7bf2b79e1d98cdc96505cf6c5e444c638a5408425fb70ca87ab 45a6ad2bca9734848b2137f94e723e008ca4f9cc5d608fa41c9369f26199484b 26d57a9cd98538e0290ecfdecf0aa55fda8c4c1e494d7a693872b191c1e219ee b83c757a97acfc28207d3b12c22e7e51480e2b318d6e31cd05d9fb8ba40be727 c587ada33b489b69e8309fa4e6da6008b66f1115ea3dfaf148d54dc4fd5e0d3e c1a9cf0a32385766092ed94ffabdd1ca185dd517336d62b9acc0fa629e0e2775 43637e1ccb03e97bf246cf21f7f40e791e999d191b70bb361f253d5e28f7711e
	I0817 03:02:03.598212 1709430 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 03:02:03.607086 1709430 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:02:03.613074 1709430 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 02:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 17 02:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2075 Aug 17 03:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 17 02:59 /etc/kubernetes/scheduler.conf
	
	I0817 03:02:03.613128 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 03:02:03.618914 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 03:02:03.624793 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 03:02:03.630292 1709430 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.630355 1709430 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 03:02:03.635919 1709430 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 03:02:03.641434 1709430 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:02:03.641502 1709430 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
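The grep/rm pairs above prune component kubeconfigs that no longer point at the expected endpoint: any of the four files that does not mention https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it during the restart. A small sketch of the same pruning logic (file list and endpoint copied from the log; run as root to be able to read and remove the files):

    // confcheck.go - sketch of pruning kubeconfigs that lack the expected API endpoint.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{ // the same four files checked in the log
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // missing file: nothing to prune
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Println("removing stale", f)
                _ = os.Remove(f) // kubeadm regenerates it during the restart
            }
        }
    }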
	I0817 03:02:03.646893 1709430 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:02:03.652576 1709430 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 03:02:03.652614 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:03.715531 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.469382 1709430 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.753788332s)
	I0817 03:02:05.469407 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.630841 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:05.737765 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
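Rather than a full `kubeadm init`, the restart path replays individual init phases against /var/tmp/minikube/kubeadm.yaml in the order shown above: certs, kubeconfig, kubelet-start, control-plane, and local etcd. A sketch of that sequence (it assumes a kubeadm binary on PATH, whereas the log prepends /var/lib/minikube/binaries/v1.21.3):

    // restartphases.go - sketch of replaying the kubeadm init phases seen above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // Same ordering as the restart path in the log: certs, kubeconfigs,
        // kubelet bring-up, static control-plane manifests, then local etcd.
        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", cfg},
            {"init", "phase", "kubeconfig", "all", "--config", cfg},
            {"init", "phase", "kubelet-start", "--config", cfg},
            {"init", "phase", "control-plane", "all", "--config", cfg},
            {"init", "phase", "etcd", "local", "--config", cfg},
        }
        for _, args := range phases {
            cmd := exec.Command("sudo", append([]string{"kubeadm"}, args...)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", args, err)
                return
            }
        }
    }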
	I0817 03:02:05.802641 1709430 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:02:05.802701 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:06.312308 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:06.812440 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:07.311850 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:07.811839 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:08.312759 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:08.811837 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:09.311781 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:09.811827 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:10.312505 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:10.812802 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:11.311853 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:11.811838 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:12.312766 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:12.811823 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:13.312571 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:13.812682 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:14.311986 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:14.812787 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:15.312782 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:02:15.350503 1709430 api_server.go:70] duration metric: took 9.547861543s to wait for apiserver process to appear ...
	I0817 03:02:15.350522 1709430 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:02:15.350531 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:20.352792 1709430 api_server.go:255] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 03:02:20.853542 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:22.286600 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:02:22.286661 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:02:22.353817 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:22.428645 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:02:22.428696 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:02:22.853168 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:22.900944 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:02:22.900974 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:02:23.353240 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:23.365866 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:02:23.365915 1709430 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:02:23.853552 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:02:23.862728 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:02:23.880276 1709430 api_server.go:139] control plane version: v1.21.3
	I0817 03:02:23.880298 1709430 api_server.go:129] duration metric: took 8.529771373s to wait for apiserver health ...
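The healthz poll above starts with a 403 for the anonymous user, which is consistent with the rbac/bootstrap-roles post-start hook still being reported as failed, then returns 500 while the remaining hooks settle, and finally 200 ("ok"). A minimal poller against the same endpoint (the node IP and port are the ones from this run; certificate verification is skipped only to keep the sketch short):

    // healthzpoll.go - sketch of polling the apiserver /healthz endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Verification is skipped only to keep the sketch short; minikube itself
        // trusts the cluster CA it just provisioned.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.49.2:8443/healthz" // node IP and port from this run
        for i := 0; i < 30; i++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("unreachable:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("HTTP %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // "ok" - the control plane is healthy
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }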
	I0817 03:02:23.880307 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:02:23.880320 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:02:23.882619 1709430 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:02:23.882682 1709430 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:02:23.887761 1709430 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 03:02:23.887780 1709430 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:02:23.901886 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
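Because the docker driver is paired with the containerd runtime, kindnet is chosen as the CNI; its manifest is written to the node and applied with the cluster's own kubeconfig, as logged above. A sketch of that write-then-apply step (the manifest content below is a placeholder, kubectl is assumed to be on PATH, and the kubeconfig path is the one used in the log):

    // cniapply.go - sketch of writing a CNI manifest to disk and applying it.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholder content; the real cni.yaml pushed in the log carries the kindnet manifest.
        manifest := []byte("# kindnet DaemonSet and RBAC would go here\n")
        path := "/tmp/cni.yaml"
        if err := os.WriteFile(path, manifest, 0o644); err != nil {
            fmt.Println("write:", err)
            return
        }
        // Apply with the kubeconfig minikube maintains on the node.
        cmd := exec.Command("sudo", "kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("apply failed:", err)
        }
    }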
	I0817 03:02:24.613061 1709430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:02:24.627170 1709430 system_pods.go:59] 9 kube-system pods found
	I0817 03:02:24.627205 1709430 system_pods.go:61] "coredns-558bd4d5db-dgbzs" [69a5e40e-9bca-4e76-976f-7e87232e2501] Running
	I0817 03:02:24.627214 1709430 system_pods.go:61] "etcd-embed-certs-20210817025908-1554185" [7e3ff9cb-4663-44f8-bdeb-a6851dd56f03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 03:02:24.627235 1709430 system_pods.go:61] "kindnet-6s6ww" [582e5a12-d987-4cc2-b439-264038f7fdec] Running
	I0817 03:02:24.627250 1709430 system_pods.go:61] "kube-apiserver-embed-certs-20210817025908-1554185" [d2b29440-c8bb-4946-99af-a8f6af9d310e] Running
	I0817 03:02:24.627255 1709430 system_pods.go:61] "kube-controller-manager-embed-certs-20210817025908-1554185" [055f695a-0d98-43bb-bf98-4ef9b42a8f36] Running
	I0817 03:02:24.627259 1709430 system_pods.go:61] "kube-proxy-nxbdw" [f0cef6b9-79b0-4944-917c-a3a5d3ac0488] Running
	I0817 03:02:24.627272 1709430 system_pods.go:61] "kube-scheduler-embed-certs-20210817025908-1554185" [fc3d4c1d-1efb-47ec-bf4d-3b8f51f07643] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 03:02:24.627281 1709430 system_pods.go:61] "metrics-server-7c784ccb57-7snbh" [1e2242b2-d474-4e68-b3be-5c357740f82f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:02:24.627290 1709430 system_pods.go:61] "storage-provisioner" [486f6174-9eff-4afd-8b28-7f7f218f6341] Running
	I0817 03:02:24.627296 1709430 system_pods.go:74] duration metric: took 14.217459ms to wait for pod list to return data ...
	I0817 03:02:24.627312 1709430 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:02:24.630756 1709430 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:02:24.630785 1709430 node_conditions.go:123] node cpu capacity is 2
	I0817 03:02:24.630797 1709430 node_conditions.go:105] duration metric: took 3.48013ms to run NodePressure ...
	I0817 03:02:24.630836 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:02:24.880793 1709430 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 03:02:24.884801 1709430 kubeadm.go:746] kubelet initialised
	I0817 03:02:24.884820 1709430 kubeadm.go:747] duration metric: took 4.010036ms waiting for restarted kubelet to initialise ...
	I0817 03:02:24.884827 1709430 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:02:24.889652 1709430 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-dgbzs" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:24.902669 1709430 pod_ready.go:92] pod "coredns-558bd4d5db-dgbzs" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:24.902694 1709430 pod_ready.go:81] duration metric: took 13.016142ms waiting for pod "coredns-558bd4d5db-dgbzs" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:24.902704 1709430 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:26.912028 1709430 pod_ready.go:102] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:29.412961 1709430 pod_ready.go:102] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:31.412833 1709430 pod_ready.go:92] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:31.412855 1709430 pod_ready.go:81] duration metric: took 6.510143114s waiting for pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:31.412884 1709430 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.895057 1683677 kubeadm.go:392] StartCluster complete in 11m24.90769791s
	I0817 03:02:32.895103 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0817 03:02:32.895159 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0817 03:02:32.918790 1683677 cri.go:76] found id: ""
	I0817 03:02:32.918806 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.918827 1683677 logs.go:272] No container was found matching "kube-apiserver"
	I0817 03:02:32.918833 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0817 03:02:32.918883 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0817 03:02:32.949174 1683677 cri.go:76] found id: ""
	I0817 03:02:32.949187 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.949193 1683677 logs.go:272] No container was found matching "etcd"
	I0817 03:02:32.949198 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0817 03:02:32.949239 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0817 03:02:32.969914 1683677 cri.go:76] found id: ""
	I0817 03:02:32.969929 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.969935 1683677 logs.go:272] No container was found matching "coredns"
	I0817 03:02:32.969939 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0817 03:02:32.969977 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0817 03:02:32.990332 1683677 cri.go:76] found id: ""
	I0817 03:02:32.990347 1683677 logs.go:270] 0 containers: []
	W0817 03:02:32.990353 1683677 logs.go:272] No container was found matching "kube-scheduler"
	I0817 03:02:32.990358 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0817 03:02:32.990402 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0817 03:02:33.012039 1683677 cri.go:76] found id: ""
	I0817 03:02:33.012053 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.012059 1683677 logs.go:272] No container was found matching "kube-proxy"
	I0817 03:02:33.012064 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0817 03:02:33.012102 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0817 03:02:33.032711 1683677 cri.go:76] found id: ""
	I0817 03:02:33.032724 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.032729 1683677 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 03:02:33.032734 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0817 03:02:33.032772 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0817 03:02:33.052565 1683677 cri.go:76] found id: ""
	I0817 03:02:33.052577 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.052582 1683677 logs.go:272] No container was found matching "storage-provisioner"
	I0817 03:02:33.052588 1683677 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0817 03:02:33.052623 1683677 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0817 03:02:33.077480 1683677 cri.go:76] found id: ""
	I0817 03:02:33.077492 1683677 logs.go:270] 0 containers: []
	W0817 03:02:33.077498 1683677 logs.go:272] No container was found matching "kube-controller-manager"
	I0817 03:02:33.077506 1683677 logs.go:123] Gathering logs for container status ...
	I0817 03:02:33.077517 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 03:02:33.100699 1683677 logs.go:123] Gathering logs for kubelet ...
	I0817 03:02:33.100718 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0817 03:02:33.129118 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:27 old-k8s-version-20210817024805-1554185 kubelet[14430]: F0817 03:02:27.532257   14430 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.138717 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:28 old-k8s-version-20210817024805-1554185 kubelet[14458]: F0817 03:02:28.447274   14458 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.148323 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:29 old-k8s-version-20210817024805-1554185 kubelet[14486]: F0817 03:02:29.466067   14486 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.157836 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:30 old-k8s-version-20210817024805-1554185 kubelet[14514]: F0817 03:02:30.451892   14514 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.167337 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:31 old-k8s-version-20210817024805-1554185 kubelet[14542]: F0817 03:02:31.453780   14542 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0817 03:02:33.176828 1683677 logs.go:138] Found kubelet problem: Aug 17 03:02:32 old-k8s-version-20210817024805-1554185 kubelet[14570]: F0817 03:02:32.493682   14570 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
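All six kubelet problems above are the same write failing: the kubelet cannot write the hugetlb.64kB.limit_in_bytes file under /sys/fs/cgroup/hugetlb/kubepods, a hugepage size present on this arm64 kernel, so its ContainerManager never starts and the control plane cannot come up. A tiny probe for that condition (path copied from the kubelet error above; run it on the affected node, ideally as root, to see whether the file is writable at all):

    // hugetlbcheck.go - probe for the cgroup write the kubelet keeps failing on.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path copied from the kubelet error above; run as root on the affected node.
        path := "/sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes"
        f, err := os.OpenFile(path, os.O_WRONLY, 0)
        if err != nil {
            fmt.Println("cannot open for writing:", err) // mirrors the "permission denied" in the log
            return
        }
        f.Close()
        fmt.Println("writable:", path)
    }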
	I0817 03:02:33.176996 1683677 logs.go:123] Gathering logs for dmesg ...
	I0817 03:02:33.177009 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 03:02:33.193572 1683677 logs.go:123] Gathering logs for describe nodes ...
	I0817 03:02:33.193593 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0817 03:02:33.276403 1683677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0817 03:02:33.276429 1683677 logs.go:123] Gathering logs for containerd ...
	I0817 03:02:33.276441 1683677 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
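The "Gathering logs" steps collect the usual diagnostics for a failed start: container status via crictl, the kubelet and containerd journals, dmesg warnings, and a (here failing) kubectl describe nodes. A sketch that bundles similar collectors into one file (the output path is arbitrary and the dmesg flags are simplified relative to the log):

    // gatherlogs.go - sketch of collecting similar diagnostics into one file.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        out, err := os.Create("/tmp/diagnostics.txt") // arbitrary output path
        if err != nil {
            fmt.Println("create:", err)
            return
        }
        defer out.Close()

        // Roughly the same sources the log walks through: container status,
        // kubelet and containerd journals, and recent kernel warnings.
        cmds := [][]string{
            {"crictl", "ps", "-a"},
            {"journalctl", "-u", "kubelet", "-n", "400"},
            {"journalctl", "-u", "containerd", "-n", "400"},
            {"dmesg", "--level", "warn,err,crit,alert,emerg"},
        }
        for _, c := range cmds {
            fmt.Fprintf(out, "==> %v\n", c)
            cmd := exec.Command("sudo", c...)
            cmd.Stdout, cmd.Stderr = out, out
            _ = cmd.Run() // a failing collector should not abort the report
        }
        fmt.Println("wrote /tmp/diagnostics.txt")
    }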
	W0817 03:02:33.361580 1683677 out.go:371] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0817 03:02:33.361625 1683677 out.go:242] * 
	W0817 03:02:33.361820 1683677 out.go:242] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0817 03:02:33.361869 1683677 out.go:242] * 
	W0817 03:02:33.367626 1683677 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	│                                                                                                                                                                │
	│    * Please attach the following file to the GitHub issue:                                                                                                     │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 03:02:33.369951 1683677 out.go:177] X Problems detected in kubelet:
	I0817 03:02:33.371762 1683677 out.go:177]   Aug 17 03:02:27 old-k8s-version-20210817024805-1554185 kubelet[14430]: F0817 03:02:27.532257   14430 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.373375 1683677 out.go:177]   Aug 17 03:02:28 old-k8s-version-20210817024805-1554185 kubelet[14458]: F0817 03:02:28.447274   14458 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.375775 1683677 out.go:177]   Aug 17 03:02:29 old-k8s-version-20210817024805-1554185 kubelet[14486]: F0817 03:02:29.466067   14486 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0817 03:02:33.379649 1683677 out.go:177] 
	W0817 03:02:33.379877 1683677 out.go:242] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0817 03:02:33.379979 1683677 out.go:242] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0817 03:02:33.380044 1683677 out.go:242] * Related issue: https://github.com/kubernetes/minikube/issues/4172
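The Suggestion and Related issue lines above amount to a how-to; a minimal sketch of retrying it by hand, assuming the profile name taken from the kubelet hostname in this log and omitting the other flags of the original invocation (the hugetlb cgroup permission errors shown earlier may additionally need a host-level fix):

  # Inspect why the kubelet keeps exiting (commands quoted from the log's own advice);
  # with the docker driver these run inside the node, e.g. after 'minikube ssh -p <profile>'
  systemctl status kubelet
  journalctl -xeu kubelet

  # Recreate the profile with the suggested kubelet cgroup driver override
  minikube delete -p old-k8s-version-20210817024805-1554185
  minikube start -p old-k8s-version-20210817024805-1554185 --extra-config=kubelet.cgroup-driver=systemd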
	I0817 03:02:32.926388 1709430 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:32.926430 1709430 pod_ready.go:81] duration metric: took 1.513533882s waiting for pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.926455 1709430 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.933680 1709430 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:32.933699 1709430 pod_ready.go:81] duration metric: took 7.225348ms waiting for pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.933710 1709430 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nxbdw" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.937869 1709430 pod_ready.go:92] pod "kube-proxy-nxbdw" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:32.937883 1709430 pod_ready.go:81] duration metric: took 4.167442ms waiting for pod "kube-proxy-nxbdw" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.937891 1709430 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.941486 1709430 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:02:32.941528 1709430 pod_ready.go:81] duration metric: took 3.629338ms waiting for pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:32.941550 1709430 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace to be "Ready" ...
	I0817 03:02:35.015397 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:37.015738 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:39.514875 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:41.515474 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:44.015526 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:46.515154 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:48.516349 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:51.015902 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:53.016610 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:55.516927 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:02:58.015378 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:00.016324 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:02.514890 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:04.515285 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:06.515767 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:09.016036 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:11.515428 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:13.515969 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:16.015790 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:18.016205 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:20.515609 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:23.061025 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:25.515853 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:27.520053 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:30.016199 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:32.515382 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:34.515445 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:36.515518 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:39.015854 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:41.515524 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:43.518482 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:46.015697 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:48.515478 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:50.515773 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:53.016600 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:55.516532 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:03:58.015563 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:00.015859 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:02.016302 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:04.515245 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:06.516063 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:09.015932 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:11.515424 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:14.016419 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:16.515655 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:18.520563 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:21.015597 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:23.015859 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:25.514913 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:28.015392 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:30.015742 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:32.515332 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:35.016435 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:37.520050 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:40.016544 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:42.515734 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:45.016161 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:47.515706 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:50.015655 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:52.514906 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:54.515311 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:57.015594 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:04:59.016480 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:01.516175 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:04.015967 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:06.016228 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:08.515329 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:10.515452 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:13.015531 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:15.016010 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:17.515332 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:19.515502 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:21.515626 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:23.515748 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:26.015837 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:28.516107 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:31.015443 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:33.016321 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:35.516016 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:38.014953 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:40.015429 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:42.514805 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:44.515096 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:47.015351 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:49.016205 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:51.515273 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:53.516058 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:56.015790 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:05:58.016407 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:00.515481 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:02.520899 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:05.068189 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:07.516194 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:10.015808 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:12.016283 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:14.515104 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:17.015776 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:19.515286 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:21.516219 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:24.015645 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:26.015874 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:28.016108 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:30.016312 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:32.515227 1709430 pod_ready.go:102] pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace has status "Ready":"False"
	I0817 03:06:33.011802 1709430 pod_ready.go:81] duration metric: took 4m0.070227103s waiting for pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace to be "Ready" ...
	E0817 03:06:33.011829 1709430 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-7snbh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0817 03:06:33.011847 1709430 pod_ready.go:38] duration metric: took 4m8.12699401s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:06:33.011876 1709430 kubeadm.go:604] restartCluster took 4m32.503126921s
	W0817 03:06:33.011996 1709430 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0817 03:06:33.012027 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0817 03:06:35.083075 1709430 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.07102551s)
	I0817 03:06:35.083140 1709430 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0817 03:06:35.092846 1709430 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:06:35.092902 1709430 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:06:35.115936 1709430 cri.go:76] found id: ""
	I0817 03:06:35.115984 1709430 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:06:35.122087 1709430 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 03:06:35.122134 1709430 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:06:35.129050 1709430 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 03:06:35.129082 1709430 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 03:06:35.407886 1709430 out.go:204]   - Generating certificates and keys ...
	I0817 03:06:37.162389 1709430 out.go:204]   - Booting up control plane ...
	I0817 03:06:57.233627 1709430 out.go:204]   - Configuring RBAC rules ...
	I0817 03:06:57.650410 1709430 cni.go:93] Creating CNI manager for ""
	I0817 03:06:57.650434 1709430 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:06:57.652653 1709430 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:06:57.652719 1709430 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:06:57.655675 1709430 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 03:06:57.655688 1709430 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:06:57.667447 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 03:06:58.040395 1709430 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 03:06:58.040521 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:06:58.040583 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=embed-certs-20210817025908-1554185 minikube.k8s.io/updated_at=2021_08_17T03_06_58_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:06:58.186502 1709430 ops.go:34] apiserver oom_adj: -16
	I0817 03:06:58.186627 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:06:58.766890 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:06:59.266830 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:06:59.766336 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:00.266383 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:00.767000 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:01.267124 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:01.767094 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:02.266349 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:02.766331 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:03.266338 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:03.766308 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:04.267140 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:04.766972 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:05.266794 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:05.766726 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:06.266684 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:06.766917 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:07.267140 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:07.766301 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:08.266344 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:08.767252 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:09.266788 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:09.766306 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:10.266605 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:10.767141 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:11.266958 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:11.766326 1709430 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:07:12.055816 1709430 kubeadm.go:985] duration metric: took 14.015338574s to wait for elevateKubeSystemPrivileges.
	I0817 03:07:12.055843 1709430 kubeadm.go:392] StartCluster complete in 5m11.588029043s
	I0817 03:07:12.055858 1709430 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:07:12.055936 1709430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:07:12.057221 1709430 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:07:12.596076 1709430 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210817025908-1554185" rescaled to 1
	I0817 03:07:12.596148 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 03:07:12.596231 1709430 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 03:07:12.599437 1709430 out.go:177] * Verifying Kubernetes components...
	I0817 03:07:12.596510 1709430 config.go:177] Loaded profile config "embed-certs-20210817025908-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:07:12.596525 1709430 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0817 03:07:12.599568 1709430 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210817025908-1554185"
	I0817 03:07:12.599582 1709430 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210817025908-1554185"
	W0817 03:07:12.599588 1709430 addons.go:147] addon storage-provisioner should already be in state true
	I0817 03:07:12.599609 1709430 host.go:66] Checking if "embed-certs-20210817025908-1554185" exists ...
	I0817 03:07:12.600102 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:12.600276 1709430 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:07:12.600363 1709430 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210817025908-1554185"
	I0817 03:07:12.600396 1709430 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210817025908-1554185"
	I0817 03:07:12.600657 1709430 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210817025908-1554185"
	I0817 03:07:12.600690 1709430 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210817025908-1554185"
	W0817 03:07:12.600708 1709430 addons.go:147] addon metrics-server should already be in state true
	I0817 03:07:12.600749 1709430 host.go:66] Checking if "embed-certs-20210817025908-1554185" exists ...
	I0817 03:07:12.601378 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:12.602125 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:12.601598 1709430 addons.go:59] Setting dashboard=true in profile "embed-certs-20210817025908-1554185"
	I0817 03:07:12.602383 1709430 addons.go:135] Setting addon dashboard=true in "embed-certs-20210817025908-1554185"
	W0817 03:07:12.602391 1709430 addons.go:147] addon dashboard should already be in state true
	I0817 03:07:12.602410 1709430 host.go:66] Checking if "embed-certs-20210817025908-1554185" exists ...
	I0817 03:07:12.602855 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:12.771209 1709430 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210817025908-1554185"
	W0817 03:07:12.771230 1709430 addons.go:147] addon default-storageclass should already be in state true
	I0817 03:07:12.771254 1709430 host.go:66] Checking if "embed-certs-20210817025908-1554185" exists ...
	I0817 03:07:12.771701 1709430 cli_runner.go:115] Run: docker container inspect embed-certs-20210817025908-1554185 --format={{.State.Status}}
	I0817 03:07:12.776770 1709430 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 03:07:12.776886 1709430 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:07:12.776895 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 03:07:12.776952 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:07:12.780675 1709430 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210817025908-1554185" to be "Ready" ...
	I0817 03:07:12.781000 1709430 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 03:07:12.821302 1709430 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0817 03:07:12.823121 1709430 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0817 03:07:12.823168 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 03:07:12.823176 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 03:07:12.823229 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:07:12.820718 1709430 node_ready.go:49] node "embed-certs-20210817025908-1554185" has status "Ready":"True"
	I0817 03:07:12.823410 1709430 node_ready.go:38] duration metric: took 42.704745ms waiting for node "embed-certs-20210817025908-1554185" to be "Ready" ...
	I0817 03:07:12.823423 1709430 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:07:12.826446 1709430 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0817 03:07:12.826496 1709430 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 03:07:12.826507 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 03:07:12.826547 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:07:12.856105 1709430 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-cstpw" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:12.970871 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:07:12.976655 1709430 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 03:07:12.976671 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 03:07:12.976720 1709430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210817025908-1554185
	I0817 03:07:12.993936 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:07:13.034726 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:07:13.072093 1709430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50483 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210817025908-1554185/id_rsa Username:docker}
	I0817 03:07:13.467142 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 03:07:13.467165 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 03:07:13.478535 1709430 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 03:07:13.478553 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0817 03:07:13.502671 1709430 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 03:07:13.634770 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 03:07:13.634832 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 03:07:13.671905 1709430 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 03:07:13.671959 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 03:07:13.808076 1709430 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:07:13.808105 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 03:07:13.831406 1709430 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:07:13.903914 1709430 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:07:13.913456 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 03:07:13.913504 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 03:07:14.001089 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 03:07:14.001148 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0817 03:07:14.095464 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 03:07:14.095487 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 03:07:14.191659 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 03:07:14.191682 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 03:07:14.353288 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 03:07:14.353352 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 03:07:14.438064 1709430 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.657042641s)
	I0817 03:07:14.438131 1709430 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0817 03:07:14.482667 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 03:07:14.482699 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 03:07:14.574325 1709430 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:07:14.574386 1709430 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 03:07:14.647184 1709430 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.144449438s)
	I0817 03:07:14.687797 1709430 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:07:14.962488 1709430 pod_ready.go:102] pod "coredns-558bd4d5db-cstpw" in "kube-system" namespace has status "Ready":"False"
	I0817 03:07:15.090422 1709430 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.258942039s)
	I0817 03:07:15.090538 1709430 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210817025908-1554185"
	I0817 03:07:15.090503 1709430 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18653069s)
	I0817 03:07:15.799090 1709430 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.111220812s)
	I0817 03:07:15.801079 1709430 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0817 03:07:15.801144 1709430 addons.go:344] enableAddons completed in 3.204622086s
	I0817 03:07:17.384241 1709430 pod_ready.go:102] pod "coredns-558bd4d5db-cstpw" in "kube-system" namespace has status "Ready":"False"
	I0817 03:07:19.883706 1709430 pod_ready.go:92] pod "coredns-558bd4d5db-cstpw" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:19.883726 1709430 pod_ready.go:81] duration metric: took 7.027596445s waiting for pod "coredns-558bd4d5db-cstpw" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:19.883735 1709430 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:21.892953 1709430 pod_ready.go:102] pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace has status "Ready":"False"
	I0817 03:07:23.894292 1709430 pod_ready.go:102] pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace has status "Ready":"False"
	I0817 03:07:26.391489 1709430 pod_ready.go:97] error getting pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-p9qqd" not found
	I0817 03:07:26.391521 1709430 pod_ready.go:81] duration metric: took 6.507778506s waiting for pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace to be "Ready" ...
	E0817 03:07:26.391531 1709430 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-p9qqd" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-p9qqd" not found
	I0817 03:07:26.391538 1709430 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.395159 1709430 pod_ready.go:92] pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:26.395177 1709430 pod_ready.go:81] duration metric: took 3.630141ms waiting for pod "etcd-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.395189 1709430 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.398999 1709430 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:26.399015 1709430 pod_ready.go:81] duration metric: took 3.818282ms waiting for pod "kube-apiserver-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.399023 1709430 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.404141 1709430 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:26.404158 1709430 pod_ready.go:81] duration metric: took 5.128832ms waiting for pod "kube-controller-manager-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.404167 1709430 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cjs8q" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.412964 1709430 pod_ready.go:92] pod "kube-proxy-cjs8q" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:26.412982 1709430 pod_ready.go:81] duration metric: took 8.808893ms waiting for pod "kube-proxy-cjs8q" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.412991 1709430 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.591110 1709430 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:07:26.591125 1709430 pod_ready.go:81] duration metric: took 178.126156ms waiting for pod "kube-scheduler-embed-certs-20210817025908-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:07:26.591133 1709430 pod_ready.go:38] duration metric: took 13.767698541s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:07:26.591149 1709430 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:07:26.591192 1709430 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:07:26.607203 1709430 api_server.go:70] duration metric: took 14.010942606s to wait for apiserver process to appear ...
	I0817 03:07:26.607257 1709430 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:07:26.607278 1709430 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:07:26.615555 1709430 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:07:26.616288 1709430 api_server.go:139] control plane version: v1.21.3
	I0817 03:07:26.616301 1709430 api_server.go:129] duration metric: took 9.027861ms to wait for apiserver health ...
	I0817 03:07:26.616308 1709430 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:07:26.794248 1709430 system_pods.go:59] 9 kube-system pods found
	I0817 03:07:26.794280 1709430 system_pods.go:61] "coredns-558bd4d5db-cstpw" [27d17672-32a4-41f6-829e-7536a182784e] Running
	I0817 03:07:26.794286 1709430 system_pods.go:61] "etcd-embed-certs-20210817025908-1554185" [ac48f9b3-c5d0-4533-9703-604b37fcf80a] Running
	I0817 03:07:26.794291 1709430 system_pods.go:61] "kindnet-cnwp8" [76ff7eb8-7cd3-45f4-8651-a91c5f883da1] Running
	I0817 03:07:26.794296 1709430 system_pods.go:61] "kube-apiserver-embed-certs-20210817025908-1554185" [ca9c6e16-335a-40a8-9194-c17f7fd7b828] Running
	I0817 03:07:26.794323 1709430 system_pods.go:61] "kube-controller-manager-embed-certs-20210817025908-1554185" [aa65bef2-d3bc-46e4-9bb0-3688bbdd8d34] Running
	I0817 03:07:26.794335 1709430 system_pods.go:61] "kube-proxy-cjs8q" [9d9df0cd-9f52-42a0-80dc-0d78009fd46c] Running
	I0817 03:07:26.794340 1709430 system_pods.go:61] "kube-scheduler-embed-certs-20210817025908-1554185" [83426ee4-c7fa-4cf3-a49f-d115b900a37e] Running
	I0817 03:07:26.794347 1709430 system_pods.go:61] "metrics-server-7c784ccb57-48wrs" [1573fd26-713e-4757-9c30-cdb6f8181a96] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:07:26.794360 1709430 system_pods.go:61] "storage-provisioner" [09ad25fa-17f2-48b6-b8fc-fe277ad894a1] Running
	I0817 03:07:26.794366 1709430 system_pods.go:74] duration metric: took 178.052984ms to wait for pod list to return data ...
	I0817 03:07:26.794373 1709430 default_sa.go:34] waiting for default service account to be created ...
	I0817 03:07:26.991145 1709430 default_sa.go:45] found service account: "default"
	I0817 03:07:26.991162 1709430 default_sa.go:55] duration metric: took 196.767324ms for default service account to be created ...
	I0817 03:07:26.991169 1709430 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 03:07:27.193745 1709430 system_pods.go:86] 9 kube-system pods found
	I0817 03:07:27.193778 1709430 system_pods.go:89] "coredns-558bd4d5db-cstpw" [27d17672-32a4-41f6-829e-7536a182784e] Running
	I0817 03:07:27.193785 1709430 system_pods.go:89] "etcd-embed-certs-20210817025908-1554185" [ac48f9b3-c5d0-4533-9703-604b37fcf80a] Running
	I0817 03:07:27.193790 1709430 system_pods.go:89] "kindnet-cnwp8" [76ff7eb8-7cd3-45f4-8651-a91c5f883da1] Running
	I0817 03:07:27.193797 1709430 system_pods.go:89] "kube-apiserver-embed-certs-20210817025908-1554185" [ca9c6e16-335a-40a8-9194-c17f7fd7b828] Running
	I0817 03:07:27.193803 1709430 system_pods.go:89] "kube-controller-manager-embed-certs-20210817025908-1554185" [aa65bef2-d3bc-46e4-9bb0-3688bbdd8d34] Running
	I0817 03:07:27.193807 1709430 system_pods.go:89] "kube-proxy-cjs8q" [9d9df0cd-9f52-42a0-80dc-0d78009fd46c] Running
	I0817 03:07:27.193813 1709430 system_pods.go:89] "kube-scheduler-embed-certs-20210817025908-1554185" [83426ee4-c7fa-4cf3-a49f-d115b900a37e] Running
	I0817 03:07:27.193824 1709430 system_pods.go:89] "metrics-server-7c784ccb57-48wrs" [1573fd26-713e-4757-9c30-cdb6f8181a96] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:07:27.193832 1709430 system_pods.go:89] "storage-provisioner" [09ad25fa-17f2-48b6-b8fc-fe277ad894a1] Running
	I0817 03:07:27.193846 1709430 system_pods.go:126] duration metric: took 202.650824ms to wait for k8s-apps to be running ...
	I0817 03:07:27.193853 1709430 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 03:07:27.193907 1709430 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:07:27.202944 1709430 system_svc.go:56] duration metric: took 9.088603ms WaitForService to wait for kubelet.
	I0817 03:07:27.202960 1709430 kubeadm.go:547] duration metric: took 14.606702951s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 03:07:27.202978 1709430 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:07:27.390641 1709430 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:07:27.390662 1709430 node_conditions.go:123] node cpu capacity is 2
	I0817 03:07:27.390673 1709430 node_conditions.go:105] duration metric: took 187.69011ms to run NodePressure ...
	I0817 03:07:27.390682 1709430 start.go:231] waiting for startup goroutines ...
	I0817 03:07:27.446694 1709430 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0817 03:07:27.449081 1709430 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210817025908-1554185" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	104d615c49012       523cad1a4df73       20 seconds ago      Exited              dashboard-metrics-scraper   1                   6a4894b67801f
	08e5ff1e049d0       85e6c0cff043f       26 seconds ago      Running             kubernetes-dashboard        0                   d0ffff888b08e
	7d3caf80168a2       ba04bb24b9575       27 seconds ago      Running             storage-provisioner         0                   92635160ad3b9
	20c35ea263834       1a1f05a2cd7c2       29 seconds ago      Running             coredns                     0                   ce0590e491e2d
	e08609b351821       f37b7c809e5dc       30 seconds ago      Running             kindnet-cni                 0                   cdce4507fb252
	f780ed4300cf6       4ea38350a1beb       30 seconds ago      Running             kube-proxy                  0                   7291df2308366
	feb9dce8235d6       31a3b96cefc1e       55 seconds ago      Running             kube-scheduler              0                   8cb8e2f03ce9b
	dcde32be4bc6b       44a6d50ef170d       55 seconds ago      Running             kube-apiserver              0                   3a5be5205d51f
	ed91afd79a1d0       05b738aa1bc63       55 seconds ago      Running             etcd                        0                   72070b39b4062
	293dfb27a1bc6       cb310ff289d79       55 seconds ago      Running             kube-controller-manager     0                   79b3b926c5f79
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 03:01:44 UTC, end at Tue 2021-08-17 03:07:43 UTC. --
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.757799740Z" level=info msg="StopContainer for \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\" returns successfully"
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.758184302Z" level=info msg="StopPodSandbox for \"93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1\""
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.758248031Z" level=info msg="Container to stop \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.772014632Z" level=info msg="TaskExit event &TaskExit{ContainerID:93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1,ID:93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1,Pid:5109,ExitStatus:137,ExitedAt:2021-08-17 03:07:21.77186663 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.795017322Z" level=info msg="shim disconnected" id=93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.795072239Z" level=error msg="copy shim log" error="read /proc/self/fd/77: file already closed"
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.824946570Z" level=info msg="TearDown network for sandbox \"93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1\" successfully"
	Aug 17 03:07:21 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:21.824976978Z" level=info msg="StopPodSandbox for \"93ba703241bd4c9fa69dcaeeac49325a5cd82015d74fdbd26df7140d300451b1\" returns successfully"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.251473038Z" level=info msg="CreateContainer within sandbox \"6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.253569106Z" level=info msg="RemoveContainer for \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.270691939Z" level=info msg="RemoveContainer for \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\" returns successfully"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.280454554Z" level=info msg="CreateContainer within sandbox \"6a4894b67801ff4e314c729de676fb5a076519cca521dbe9929a4d6be9021fa2\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.284109605Z" level=error msg="ContainerStatus for \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"124c49be3fc0f0c4acc555807fd802e5e0dc6a95b32b84c1576b8ec395b0d8a3\": not found"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.284171266Z" level=info msg="StartContainer for \"104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.355718944Z" level=info msg="Finish piping stderr of container \"104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.355793733Z" level=info msg="Finish piping stdout of container \"104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.360330605Z" level=info msg="StartContainer for \"104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a\" returns successfully"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.360416052Z" level=info msg="TaskExit event &TaskExit{ContainerID:104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a,ID:104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a,Pid:6194,ExitStatus:1,ExitedAt:2021-08-17 03:07:22.357262272 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.434625282Z" level=info msg="shim disconnected" id=104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:22.434780071Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:23.257450513Z" level=info msg="RemoveContainer for \"1997fae2ba8dd57148b00caa95caf58f3dd2dd4a8f8fb73d023d7146823bcd1a\""
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:23.262189048Z" level=info msg="RemoveContainer for \"1997fae2ba8dd57148b00caa95caf58f3dd2dd4a8f8fb73d023d7146823bcd1a\" returns successfully"
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:28.164761565Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:28.169415917Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 containerd[343]: time="2021-08-17T03:07:28.172218825Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	
	* 
	* ==> coredns [20c35ea263834a5dfc0400603cebb82a9190043e7c342025cf4119a227454c62] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210817025908-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-20210817025908-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=embed-certs-20210817025908-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T03_06_58_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 03:06:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20210817025908-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 03:07:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 03:07:36 +0000   Tue, 17 Aug 2021 03:06:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 03:07:36 +0000   Tue, 17 Aug 2021 03:06:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 03:07:36 +0000   Tue, 17 Aug 2021 03:06:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 03:07:36 +0000   Tue, 17 Aug 2021 03:07:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    embed-certs-20210817025908-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                6441d343-a3e8-41b6-b426-f9fef981de1d
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-cstpw                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-embed-certs-20210817025908-1554185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-cnwp8                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-embed-certs-20210817025908-1554185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-embed-certs-20210817025908-1554185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-cjs8q                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-embed-certs-20210817025908-1554185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 metrics-server-7c784ccb57-48wrs                               100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         29s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-gdlf6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-wrgrg                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             520Mi (6%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  57s (x5 over 57s)  kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x4 over 57s)  kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x4 over 57s)  kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                32s                kubelet     Node embed-certs-20210817025908-1554185 status is now: NodeReady
	  Normal  Starting                 30s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [ed91afd79a1d0987c4764bcd6536ad4166d7ed156fe51e408d5d7bb4ceeee445] <==
	* 2021-08-17 03:06:47.465645 W | auth: simple token is not cryptographically signed
	2021-08-17 03:06:47.483862 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-17 03:06:47.484214 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/17 03:06:47 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:06:47.484589 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 03:06:47.486877 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 03:06:47.486979 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-17 03:06:47.487149 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/17 03:06:47 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 03:06:47 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 03:06:47 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 03:06:47 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 03:06:47 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 03:06:47.872537 I | etcdserver: published {Name:embed-certs-20210817025908-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 03:06:47.872661 I | embed: ready to serve client requests
	2021-08-17 03:06:47.874159 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 03:06:47.874369 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 03:06:47.874585 I | embed: ready to serve client requests
	2021-08-17 03:06:47.874877 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 03:06:47.874936 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 03:06:47.876101 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 03:07:09.768120 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 03:07:13.536783 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 03:07:23.535879 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 03:07:33.536475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  03:07:43 up 10:50,  0 users,  load average: 2.09, 1.47, 1.53
	Linux embed-certs-20210817025908-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [dcde32be4bc6b57c57009c7e84b5e926b1ae40aaee60bc99390c730e27503fdf] <==
	* I0817 03:06:54.810683       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0817 03:06:54.890229       1 controller.go:611] quota admission added evaluator for: namespaces
	I0817 03:06:55.475186       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 03:06:55.475364       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 03:06:55.480083       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 03:06:55.482827       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 03:06:55.482848       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 03:06:55.908450       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 03:06:55.948487       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 03:06:56.070802       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 03:06:56.071935       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 03:06:56.075413       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 03:06:56.570887       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 03:06:57.211421       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 03:06:57.506639       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 03:06:57.541978       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 03:07:11.644117       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 03:07:12.005729       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	W0817 03:07:17.216518       1 handler_proxy.go:102] no RequestInfo found in the context
	E0817 03:07:17.216578       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 03:07:17.216585       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 03:07:26.025049       1 client.go:360] parsed scheme: "passthrough"
	I0817 03:07:26.025089       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 03:07:26.025221       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [293dfb27a1bc65d52292b739cfe59a41a89f2458295f20ab94a2158cfad16095] <==
	* I0817 03:07:14.945064       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0817 03:07:14.984614       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0817 03:07:14.985230       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0817 03:07:14.992388       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-48wrs"
	I0817 03:07:15.355139       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0817 03:07:15.375358       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0817 03:07:15.394406       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:07:15.394437       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:07:15.405651       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:07:15.405746       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:07:15.439796       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:07:15.440142       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:07:15.440482       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:07:15.440499       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:07:15.453265       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:07:15.453958       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:07:15.454110       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:07:15.454205       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:07:15.468546       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:07:15.468717       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:07:15.472686       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:07:15.472870       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:07:15.503469       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-gdlf6"
	I0817 03:07:15.503815       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-wrgrg"
	I0817 03:07:16.064919       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [f780ed4300cf618bb134baa365ed9c14f74ed57d3bc0c4664ec3c94f89b83df9] <==
	* I0817 03:07:13.696074       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 03:07:13.696114       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 03:07:13.696133       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 03:07:13.721438       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 03:07:13.721467       1 server_others.go:212] Using iptables Proxier.
	I0817 03:07:13.721480       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 03:07:13.721490       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 03:07:13.721750       1 server.go:643] Version: v1.21.3
	I0817 03:07:13.722292       1 config.go:315] Starting service config controller
	I0817 03:07:13.722307       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 03:07:13.722976       1 config.go:224] Starting endpoint slice config controller
	I0817 03:07:13.723000       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0817 03:07:13.728562       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0817 03:07:13.801992       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 03:07:13.822942       1 shared_informer.go:247] Caches are synced for service config 
	I0817 03:07:13.826983       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [feb9dce8235d6680a74d1250c1b155bed5fe3e1b90efe9c40708a7c382b8eedb] <==
	* W0817 03:06:54.683767       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 03:06:54.683945       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 03:06:54.684042       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 03:06:54.684108       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 03:06:54.736843       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0817 03:06:54.759079       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 03:06:54.771769       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 03:06:54.771907       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0817 03:06:54.823892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 03:06:54.824116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 03:06:54.824293       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 03:06:54.824475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 03:06:54.824654       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 03:06:54.829806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 03:06:54.830040       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 03:06:54.830230       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 03:06:54.830432       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 03:06:54.830627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 03:06:54.830835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 03:06:54.831013       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 03:06:54.831221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 03:06:54.836466       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 03:06:55.666467       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 03:06:55.667141       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0817 03:06:56.472233       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 03:01:44 UTC, end at Tue 2021-08-17 03:07:43 UTC. --
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.488635    4620 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43d065c7-802f-4efb-a8de-757cd1544054-config-volume\") pod \"43d065c7-802f-4efb-a8de-757cd1544054\" (UID: \"43d065c7-802f-4efb-a8de-757cd1544054\") "
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.488702    4620 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gns2m\" (UniqueName: \"kubernetes.io/projected/43d065c7-802f-4efb-a8de-757cd1544054-kube-api-access-gns2m\") pod \"43d065c7-802f-4efb-a8de-757cd1544054\" (UID: \"43d065c7-802f-4efb-a8de-757cd1544054\") "
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: W0817 03:07:22.489716    4620 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/43d065c7-802f-4efb-a8de-757cd1544054/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.489856    4620 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43d065c7-802f-4efb-a8de-757cd1544054-config-volume" (OuterVolumeSpecName: "config-volume") pod "43d065c7-802f-4efb-a8de-757cd1544054" (UID: "43d065c7-802f-4efb-a8de-757cd1544054"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.493418    4620 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d065c7-802f-4efb-a8de-757cd1544054-kube-api-access-gns2m" (OuterVolumeSpecName: "kube-api-access-gns2m") pod "43d065c7-802f-4efb-a8de-757cd1544054" (UID: "43d065c7-802f-4efb-a8de-757cd1544054"). InnerVolumeSpecName "kube-api-access-gns2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.589722    4620 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43d065c7-802f-4efb-a8de-757cd1544054-config-volume\") on node \"embed-certs-20210817025908-1554185\" DevicePath \"\""
	Aug 17 03:07:22 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:22.589760    4620 reconciler.go:319] "Volume detached for volume \"kube-api-access-gns2m\" (UniqueName: \"kubernetes.io/projected/43d065c7-802f-4efb-a8de-757cd1544054-kube-api-access-gns2m\") on node \"embed-certs-20210817025908-1554185\" DevicePath \"\""
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 kubelet[4620]: W0817 03:07:23.032258    4620 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod2ecf1109-a9f9-4504-a5ac-e2dd767aa611/1997fae2ba8dd57148b00caa95caf58f3dd2dd4a8f8fb73d023d7146823bcd1a WatchSource:0}: task 1997fae2ba8dd57148b00caa95caf58f3dd2dd4a8f8fb73d023d7146823bcd1a not found: not found
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:23.255765    4620 scope.go:111] "RemoveContainer" containerID="1997fae2ba8dd57148b00caa95caf58f3dd2dd4a8f8fb73d023d7146823bcd1a"
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:23.256083    4620 scope.go:111] "RemoveContainer" containerID="104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a"
	Aug 17 03:07:23 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:23.256356    4620 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gdlf6_kubernetes-dashboard(2ecf1109-a9f9-4504-a5ac-e2dd767aa611)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gdlf6" podUID=2ecf1109-a9f9-4504-a5ac-e2dd767aa611
	Aug 17 03:07:24 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:24.258670    4620 scope.go:111] "RemoveContainer" containerID="104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a"
	Aug 17 03:07:24 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:24.258988    4620 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gdlf6_kubernetes-dashboard(2ecf1109-a9f9-4504-a5ac-e2dd767aa611)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gdlf6" podUID=2ecf1109-a9f9-4504-a5ac-e2dd767aa611
	Aug 17 03:07:24 embed-certs-20210817025908-1554185 kubelet[4620]: W0817 03:07:24.537147    4620 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod2ecf1109-a9f9-4504-a5ac-e2dd767aa611/104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a WatchSource:0}: task 104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a not found: not found
	Aug 17 03:07:26 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:26.425591    4620 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod1573fd26-713e-4757-9c30-cdb6f8181a96\": RecentStats: unable to find data in memory cache], [\"/kubepods/besteffort/pod09ad25fa-17f2-48b6-b8fc-fe277ad894a1\": RecentStats: unable to find data in memory cache]"
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:28.172390    4620 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:28.172447    4620 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:28.172573    4620 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sm4xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-48wrs_kube-system(1573fd26-713e-4757-9c30-cdb6f8181a96): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:28.172630    4620 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-7c784ccb57-48wrs" podUID=1573fd26-713e-4757-9c30-cdb6f8181a96
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:28.718906    4620 scope.go:111] "RemoveContainer" containerID="104d615c49012f80e5cfc884ef1796952122aec5121a4ceca20253024aeda74a"
	Aug 17 03:07:28 embed-certs-20210817025908-1554185 kubelet[4620]: E0817 03:07:28.719376    4620 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-gdlf6_kubernetes-dashboard(2ecf1109-a9f9-4504-a5ac-e2dd767aa611)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-gdlf6" podUID=2ecf1109-a9f9-4504-a5ac-e2dd767aa611
	Aug 17 03:07:38 embed-certs-20210817025908-1554185 kubelet[4620]: I0817 03:07:38.647870    4620 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 17 03:07:38 embed-certs-20210817025908-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 03:07:38 embed-certs-20210817025908-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 03:07:38 embed-certs-20210817025908-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [08e5ff1e049d04e69e1bc93715ba24a1a241003239a9ab427928dc9400a2eaa2] <==
	* 2021/08/17 03:07:16 Starting overwatch
	2021/08/17 03:07:16 Using namespace: kubernetes-dashboard
	2021/08/17 03:07:16 Using in-cluster config to connect to apiserver
	2021/08/17 03:07:16 Using secret token for csrf signing
	2021/08/17 03:07:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/17 03:07:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/17 03:07:16 Successful initial request to the apiserver, version: v1.21.3
	2021/08/17 03:07:16 Generating JWE encryption key
	2021/08/17 03:07:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/17 03:07:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/17 03:07:17 Initializing JWE encryption key from synchronized object
	2021/08/17 03:07:17 Creating in-cluster Sidecar client
	2021/08/17 03:07:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/17 03:07:17 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [7d3caf80168a280ca70084c3788249a5a822c47360e6998eaf1c3c0c9959489a] <==
	* I0817 03:07:16.237266       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 03:07:16.255013       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 03:07:16.255065       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 03:07:16.261589       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 03:07:16.261848       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210817025908-1554185_efd4082b-b30e-41ef-b4db-2d7c4933d659!
	I0817 03:07:16.262901       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c7fdbca1-fdf4-47fe-a32b-419df177bb7c", APIVersion:"v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210817025908-1554185_efd4082b-b30e-41ef-b4db-2d7c4933d659 became leader
	I0817 03:07:16.362974       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210817025908-1554185_efd4082b-b30e-41ef-b4db-2d7c4933d659!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185: exit status 2 (377.243582ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20210817025908-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-48wrs
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210817025908-1554185 describe pod metrics-server-7c784ccb57-48wrs
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20210817025908-1554185 describe pod metrics-server-7c784ccb57-48wrs: exit status 1 (77.456566ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-48wrs" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20210817025908-1554185 describe pod metrics-server-7c784ccb57-48wrs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.05s)
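For reference, the post-mortem above is driven by three commands that can also be run by hand against a live profile. A minimal sketch using the profile name from this run, with the non-running pod name left as a placeholder:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185
	kubectl --context embed-certs-20210817025908-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	kubectl --context embed-certs-20210817025908-1554185 describe pod <non-running-pod-name>

In this run the describe step returned NotFound, presumably because the metrics-server pod listed as non-running had already been deleted or replaced between the listing and the describe call.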

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
(previous warning repeated 99 times while the apiserver at 192.168.58.2:8443 remained unreachable)
E0817 03:13:14.883939 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 03:13:14.897996 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
(previous warning repeated 17 times while the apiserver remained unreachable)
E0817 03:13:31.847223 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	[previous line repeated 76 more times]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	[previous line repeated 4 more times]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	[previous line repeated 4 more times]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	[previous line repeated 16 more times]
E0817 03:15:55.535259 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	[previous line repeated 64 more times]
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	[... identical WARNING repeated 13 more times ...]
E0817 03:17:18.576135 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	[... identical WARNING repeated 5 more times ...]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
	[... identical WARNING repeated 39 more times ...]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
	[... identical WARNING repeated 6 more times ...]
E0817 03:18:14.884093 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
	[... identical WARNING repeated 10 more times ...]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0817 03:18:31.848171 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
	[... identical WARNING repeated 40 more times ...]
E0817 03:19:14.631642 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
E0817 03:19:14.636869 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
E0817 03:19:14.647051 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
E0817 03:19:14.667254 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
E0817 03:19:14.707446 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
E0817 03:19:14.787664 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
E0817 03:19:14.948009 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0817 03:19:15.268570 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
E0817 03:19:15.909241 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0817 03:19:17.190032 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0817 03:19:19.750933 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0817 03:19:24.871695 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
	[... identical WARNING repeated 10 more times ...]
E0817 03:19:35.112067 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0817 03:19:37.930511 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
	[... identical WARNING repeated 17 more times ...]
E0817 03:19:55.592519 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
	[... identical WARNING repeated 12 more times ...]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
start_stop_delete_test.go:260: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185
start_stop_delete_test.go:260: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185: exit status 2 (273.897582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:260: status error: exit status 2 (may be ok)
start_stop_delete_test.go:260: "old-k8s-version-20210817024805-1554185" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:261: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210817024805-1554185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:264: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210817024805-1554185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.395µs)
start_stop_delete_test.go:266: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20210817024805-1554185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:270: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
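The repeated "pod list ... returned: context deadline exceeded" warnings above come from a label-selector poll that keeps listing pods until one is Running or the overall deadline expires. As a minimal sketch only (assuming a standard client-go clientset; the function and package names below are illustrative, not the actual helpers_test.go code), the pattern looks roughly like this:

	package waitutil

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsByLabel lists pods in ns matching selector every few seconds
	// and returns once at least one of them is Running. When the context
	// deadline passes, each failed list surfaces as a warning like the ones
	// in the log, and the final error is the timeout reported by the test.
	func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(3 * time.Second)
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
			} else {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("pod %q failed to start: %w", selector, ctx.Err())
			case <-ticker.C:
			}
		}
	}

With the apiserver reported as "Stopped", every list call fails until the 9m0s budget is spent, which is exactly the failure mode recorded here.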
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210817024805-1554185
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210817024805-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29",
	        "Created": "2021-08-17T02:48:07.556948774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1683873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T02:50:51.260024317Z",
	            "FinishedAt": "2021-08-17T02:50:50.057096311Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/hostname",
	        "HostsPath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/hosts",
	        "LogPath": "/var/lib/docker/containers/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29/c8b9fbcd517c9d3b69706f1bbecd676db005fdedacfd96a665288c87c000ff29-json.log",
	        "Name": "/old-k8s-version-20210817024805-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210817024805-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210817024805-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/236bd68e5aeeab8c38058476cfda09686a3fcf6be0c71c5ac7a1ca8635135a12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210817024805-1554185",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210817024805-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210817024805-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210817024805-1554185",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210817024805-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8db859a5f76fa1e2614ca4a38811cf6cdc70c3b63b0f36c6d5b6de8b99796396",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50467"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50464"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50466"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50465"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8db859a5f76f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210817024805-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c8b9fbcd517c",
	                        "old-k8s-version-20210817024805-1554185"
	                    ],
	                    "NetworkID": "9aefabdb2d1d911a23f12e9e262da9d968a8cfa23ed9a2191472a782b604d2a8",
	                    "EndpointID": "1f6b1ef1bd2c282d73335e7da0951a5c768f124f1509e6b7cd10bfc8e555b194",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
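The JSON dump above is the full `docker inspect` output for the node container. When only a single field is needed (for example the container's IP on its profile network, 192.168.58.2 here), a Go template query is a much smaller way to get it. The helper below is an assumed, illustrative sketch (not minikube's own code) using the documented `--format` flag:

	package inspectutil

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIP returns the IP address of the container on the named
	// network, e.g. the 192.168.58.2 shown in the inspect output above.
	func containerIP(container, network string) (string, error) {
		format := fmt.Sprintf("{{(index .NetworkSettings.Networks %q).IPAddress}}", network)
		out, err := exec.Command("docker", "inspect", "--format", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}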
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185
E0817 03:20:36.553690 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185: exit status 2 (344.011945ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
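Note how the post-mortem still records "Running" even though `minikube status --format={{.Host}}` exits with status 2: the status subcommand can exit non-zero while still printing the per-component state, so the harness treats the exit code as informational ("may be ok"). A hedged sketch of that pattern (assumed helper names, not the test's actual code):

	package statusutil

	import (
		"errors"
		"os/exec"
		"strings"
	)

	// hostState runs `minikube status --format={{.Host}}` for a profile and
	// keeps whatever the template printed even when the command exits
	// non-zero, returning the exit code alongside the captured state.
	func hostState(binary, profile string) (state string, exitCode int, err error) {
		cmd := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, runErr := cmd.Output()
		state = strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		if errors.As(runErr, &exitErr) {
			// Non-zero exit: report the code but keep the captured state.
			return state, exitErr.ExitCode(), nil
		}
		return state, 0, runErr
	}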
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-20210817024805-1554185 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 -p old-k8s-version-20210817024805-1554185 logs -n 25: exit status 110 (1.327273804s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                   Profile                    |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|----------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | embed-certs-20210817025908-1554185                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:42 UTC | Tue, 17 Aug 2021 03:07:43 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:44 UTC | Tue, 17 Aug 2021 03:07:47 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:47 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20210817030748-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | disable-driver-mounts-20210817030748-1554185               |                                              |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:09:14 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                              |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                              |         |         |                               |                               |
	|         | --driver=docker                                            |                                              |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:24 UTC | Tue, 17 Aug 2021 03:09:24 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                              |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                              |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:25 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                              |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                              |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:15:18 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                              |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                              |         |         |                               |                               |
	|         | --driver=docker                                            |                                              |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:28 UTC | Tue, 17 Aug 2021 03:15:29 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                              |         |         |                               |                               |
	| -p      | no-preload-20210817030748-1554185                          | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:31 UTC | Tue, 17 Aug 2021 03:15:32 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| -p      | no-preload-20210817030748-1554185                          | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:33 UTC | Tue, 17 Aug 2021 03:15:34 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:35 UTC | Tue, 17 Aug 2021 03:15:38 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:38 UTC | Tue, 17 Aug 2021 03:15:38 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	| start   | -p newest-cni-20210817031538-1554185 --memory=2200         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:39 UTC | Tue, 17 Aug 2021 03:17:03 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                              |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                              |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                              |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                              |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:03 UTC | Tue, 17 Aug 2021 03:17:04 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                              |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                              |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:04 UTC | Tue, 17 Aug 2021 03:17:24 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                              |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:24 UTC | Tue, 17 Aug 2021 03:17:24 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                              |         |         |                               |                               |
	| start   | -p newest-cni-20210817031538-1554185 --memory=2200         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:24 UTC | Tue, 17 Aug 2021 03:18:04 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                              |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                              |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                              |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                              |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:18:04 UTC | Tue, 17 Aug 2021 03:18:05 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                              |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:18:29 UTC | Tue, 17 Aug 2021 03:18:33 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:18:33 UTC | Tue, 17 Aug 2021 03:18:33 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	| start   | -p auto-20210817024630-1554185                             | auto-20210817024630-1554185                  | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:18:33 UTC | Tue, 17 Aug 2021 03:20:08 UTC |
	|         | --memory=2048                                              |                                              |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                              |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                              |         |         |                               |                               |
	|         | --driver=docker                                            |                                              |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                              |         |         |                               |                               |
	| ssh     | -p auto-20210817024630-1554185                             | auto-20210817024630-1554185                  | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:20:08 UTC | Tue, 17 Aug 2021 03:20:08 UTC |
	|         | pgrep -a kubelet                                           |                                              |         |         |                               |                               |
	| delete  | -p auto-20210817024630-1554185                             | auto-20210817024630-1554185                  | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:20:20 UTC | Tue, 17 Aug 2021 03:20:22 UTC |
	|---------|------------------------------------------------------------|----------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 03:20:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 03:20:22.836413 1768710 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:20:22.836515 1768710 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:20:22.836527 1768710 out.go:311] Setting ErrFile to fd 2...
	I0817 03:20:22.836530 1768710 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:20:22.836666 1768710 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:20:22.836932 1768710 out.go:305] Setting JSON to false
	I0817 03:20:22.837859 1768710 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39761,"bootTime":1629130662,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 03:20:22.837930 1768710 start.go:121] virtualization:  
	I0817 03:20:22.841211 1768710 out.go:177] * [cilium-20210817024631-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 03:20:22.842861 1768710 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 03:20:22.844753 1768710 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:20:22.843809 1768710 notify.go:169] Checking for updates...
	I0817 03:20:22.846361 1768710 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 03:20:22.847850 1768710 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 03:20:22.848342 1768710 config.go:177] Loaded profile config "old-k8s-version-20210817024805-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0817 03:20:22.848385 1768710 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 03:20:22.887427 1768710 docker.go:132] docker version: linux-20.10.8
	I0817 03:20:22.887501 1768710 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:20:22.982591 1768710 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:20:22.924269928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:20:22.982686 1768710 docker.go:244] overlay module found
	I0817 03:20:22.984758 1768710 out.go:177] * Using the docker driver based on user configuration
	I0817 03:20:22.984779 1768710 start.go:278] selected driver: docker
	I0817 03:20:22.984790 1768710 start.go:751] validating driver "docker" against <nil>
	I0817 03:20:22.984804 1768710 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 03:20:22.984843 1768710 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:20:22.984857 1768710 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:20:22.986500 1768710 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:20:22.986774 1768710 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:20:23.064495 1768710 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:20:23.013717142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:20:23.064625 1768710 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 03:20:23.064788 1768710 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 03:20:23.064817 1768710 cni.go:93] Creating CNI manager for "cilium"
	I0817 03:20:23.064828 1768710 start_flags.go:272] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0817 03:20:23.064838 1768710 start_flags.go:277] config:
	{Name:cilium-20210817024631-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210817024631-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:20:23.066737 1768710 out.go:177] * Starting control plane node cilium-20210817024631-1554185 in cluster cilium-20210817024631-1554185
	I0817 03:20:23.066765 1768710 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 03:20:23.068349 1768710 out.go:177] * Pulling base image ...
	I0817 03:20:23.068370 1768710 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:20:23.068396 1768710 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 03:20:23.068408 1768710 cache.go:56] Caching tarball of preloaded images
	I0817 03:20:23.068530 1768710 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 03:20:23.068550 1768710 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 03:20:23.068642 1768710 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/config.json ...
	I0817 03:20:23.068664 1768710 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/config.json: {Name:mk94c327332fb696485f075094bc8230be0afbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:20:23.068800 1768710 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 03:20:23.111533 1768710 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 03:20:23.111554 1768710 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 03:20:23.111568 1768710 cache.go:205] Successfully downloaded all kic artifacts
	I0817 03:20:23.111599 1768710 start.go:313] acquiring machines lock for cilium-20210817024631-1554185: {Name:mk97f932f54458a22437a38c3a00eeb95e17ef3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:20:23.111710 1768710 start.go:317] acquired machines lock for "cilium-20210817024631-1554185" in 90.107µs
	I0817 03:20:23.111736 1768710 start.go:89] Provisioning new machine with config: &{Name:cilium-20210817024631-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210817024631-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 03:20:23.111815 1768710 start.go:126] createHost starting for "" (driver="docker")
	I0817 03:20:23.115096 1768710 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0817 03:20:23.115323 1768710 start.go:160] libmachine.API.Create for "cilium-20210817024631-1554185" (driver="docker")
	I0817 03:20:23.115350 1768710 client.go:168] LocalClient.Create starting
	I0817 03:20:23.115423 1768710 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0817 03:20:23.115483 1768710 main.go:130] libmachine: Decoding PEM data...
	I0817 03:20:23.115504 1768710 main.go:130] libmachine: Parsing certificate...
	I0817 03:20:23.115604 1768710 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0817 03:20:23.115625 1768710 main.go:130] libmachine: Decoding PEM data...
	I0817 03:20:23.115640 1768710 main.go:130] libmachine: Parsing certificate...
	I0817 03:20:23.116000 1768710 cli_runner.go:115] Run: docker network inspect cilium-20210817024631-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 03:20:23.144976 1768710 cli_runner.go:162] docker network inspect cilium-20210817024631-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 03:20:23.145037 1768710 network_create.go:255] running [docker network inspect cilium-20210817024631-1554185] to gather additional debugging logs...
	I0817 03:20:23.145054 1768710 cli_runner.go:115] Run: docker network inspect cilium-20210817024631-1554185
	W0817 03:20:23.173651 1768710 cli_runner.go:162] docker network inspect cilium-20210817024631-1554185 returned with exit code 1
	I0817 03:20:23.173674 1768710 network_create.go:258] error running [docker network inspect cilium-20210817024631-1554185]: docker network inspect cilium-20210817024631-1554185: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20210817024631-1554185
	I0817 03:20:23.173686 1768710 network_create.go:260] output of [docker network inspect cilium-20210817024631-1554185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20210817024631-1554185
	
	** /stderr **
	I0817 03:20:23.173732 1768710 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:20:23.202705 1768710 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x4000b86888] misses:0}
	I0817 03:20:23.202750 1768710 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 03:20:23.202773 1768710 network_create.go:106] attempt to create docker network cilium-20210817024631-1554185 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 03:20:23.202830 1768710 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20210817024631-1554185
	I0817 03:20:23.285230 1768710 network_create.go:90] docker network cilium-20210817024631-1554185 192.168.49.0/24 created
	I0817 03:20:23.285256 1768710 kic.go:106] calculated static IP "192.168.49.2" for the "cilium-20210817024631-1554185" container
	I0817 03:20:23.285321 1768710 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 03:20:23.325132 1768710 cli_runner.go:115] Run: docker volume create cilium-20210817024631-1554185 --label name.minikube.sigs.k8s.io=cilium-20210817024631-1554185 --label created_by.minikube.sigs.k8s.io=true
	I0817 03:20:23.370980 1768710 oci.go:102] Successfully created a docker volume cilium-20210817024631-1554185
	I0817 03:20:23.371054 1768710 cli_runner.go:115] Run: docker run --rm --name cilium-20210817024631-1554185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210817024631-1554185 --entrypoint /usr/bin/test -v cilium-20210817024631-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 03:20:23.945730 1768710 oci.go:106] Successfully prepared a docker volume cilium-20210817024631-1554185
	W0817 03:20:23.945770 1768710 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0817 03:20:23.945778 1768710 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0817 03:20:23.945840 1768710 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 03:20:23.946127 1768710 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:20:23.946353 1768710 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 03:20:23.946411 1768710 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cilium-20210817024631-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 03:20:24.068251 1768710 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20210817024631-1554185 --name cilium-20210817024631-1554185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210817024631-1554185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20210817024631-1554185 --network cilium-20210817024631-1554185 --ip 192.168.49.2 --volume cilium-20210817024631-1554185:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 03:20:24.682355 1768710 cli_runner.go:115] Run: docker container inspect cilium-20210817024631-1554185 --format={{.State.Running}}
	I0817 03:20:24.740128 1768710 cli_runner.go:115] Run: docker container inspect cilium-20210817024631-1554185 --format={{.State.Status}}
	I0817 03:20:24.780769 1768710 cli_runner.go:115] Run: docker exec cilium-20210817024631-1554185 stat /var/lib/dpkg/alternatives/iptables
	I0817 03:20:24.894320 1768710 oci.go:278] the created container "cilium-20210817024631-1554185" has a running status.
	I0817 03:20:24.894349 1768710 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa...
	I0817 03:20:25.490691 1768710 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 03:20:25.671477 1768710 cli_runner.go:115] Run: docker container inspect cilium-20210817024631-1554185 --format={{.State.Status}}
	I0817 03:20:25.726074 1768710 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 03:20:25.726089 1768710 kic_runner.go:115] Args: [docker exec --privileged cilium-20210817024631-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 02:50:51 UTC, end at Tue 2021-08-17 03:20:37 UTC. --
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.468623695Z" level=info msg="RemovePodSandbox \"7d53e801511ed07e6fabcb3c88dd69fd2c4ef7c3c028e9e44605be1ffc98ba60\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.491869824Z" level=info msg="StopPodSandbox for \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.491924815Z" level=info msg="Container to stop \"fdb4bd345708970e1d90521f8d81da07a88c79e442c82c7c115a4c1c6ded93a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.491993187Z" level=info msg="TearDown network for sandbox \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.492004092Z" level=info msg="StopPodSandbox for \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.526895702Z" level=info msg="RemovePodSandbox for \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.552169798Z" level=info msg="RemovePodSandbox \"0bbec9163e7b60a5740dd92587b9eea3405e923e76a198757454c12747f5350e\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.579967120Z" level=info msg="StopPodSandbox for \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.580012288Z" level=info msg="Container to stop \"2c48aa387b60234b5845590a62ab0933aef10e3afa1695cc7f5a93e93dc5b0c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.580081120Z" level=info msg="TearDown network for sandbox \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.580094043Z" level=info msg="StopPodSandbox for \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.609914286Z" level=info msg="RemovePodSandbox for \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.620668256Z" level=info msg="RemovePodSandbox \"cda4bfc3ec1a8258bec2b4222434d745201e8c6e2463242ef21dd2ef8c095998\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.650881963Z" level=info msg="StopPodSandbox for \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.650932834Z" level=info msg="Container to stop \"86ac8067fb1b5139e8f2e23b9daa6b76aa704ec28b4c4cf6d281c7293bc4259d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.650992936Z" level=info msg="TearDown network for sandbox \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.651004021Z" level=info msg="StopPodSandbox for \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.680816075Z" level=info msg="RemovePodSandbox for \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.694511521Z" level=info msg="RemovePodSandbox \"ba00a28016fee43337a18efc704acdc96bfc6286063af6fde1abead834ffe600\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724328999Z" level=info msg="StopPodSandbox for \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724372674Z" level=info msg="Container to stop \"f8b050af48208844c31f77ed2dc4fc25f4633ce187e85801e393aa0fce9c1ce0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724442655Z" level=info msg="TearDown network for sandbox \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\" successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.724453609Z" level=info msg="StopPodSandbox for \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\" returns successfully"
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.749234435Z" level=info msg="RemovePodSandbox for \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\""
	Aug 17 02:56:29 old-k8s-version-20210817024805-1554185 containerd[341]: time="2021-08-17T02:56:29.758871386Z" level=info msg="RemovePodSandbox \"5c60ec3f1b3de155036af010e19643dd86114d4c617db63cd2b593506aba8e4b\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> kernel <==
	*  03:20:37 up 11:02,  0 users,  load average: 2.34, 2.36, 1.99
	Linux old-k8s-version-20210817024805-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 02:50:51 UTC, end at Tue 2021-08-17 03:20:37 UTC. --
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.331174   44656 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.331251   44656 server.go:141] Starting to listen on 0.0.0.0:10250
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.331949   44656 server.go:343] Adding debug handlers to kubelet server.
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.332985   44656 volume_manager.go:248] Starting Kubelet Volume Manager
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: E0817 03:20:37.333289   44656 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/old-k8s-version-20210817024805-1554185?timeout=10s: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.337134   44656 desired_state_of_world_populator.go:130] Desired state populator starts to run
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.339219   44656 clientconn.go:440] parsed scheme: "unix"
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.339342   44656 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.339435   44656 asm_arm64.s:1128] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0  <nil>}]
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.339514   44656 clientconn.go:796] ClientConn switching balancer to "pick_first"
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.339623   44656 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x4000622030, CONNECTING
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.340140   44656 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x4000622030, READY
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: E0817 03:20:37.340853   44656 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.440916   44656 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.440976   44656 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: E0817 03:20:37.441183   44656 kubelet.go:2244] node "old-k8s-version-20210817024805-1554185" not found
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.443177   44656 kubelet_node_status.go:72] Attempting to register node old-k8s-version-20210817024805-1554185
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: E0817 03:20:37.443637   44656 kubelet_node_status.go:94] Unable to register node "old-k8s-version-20210817024805-1554185" with API server: Post https://control-plane.minikube.internal:8443/api/v1/nodes: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.505781   44656 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.506539   44656 cpu_manager.go:155] [cpumanager] starting with none policy
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.506561   44656 cpu_manager.go:156] [cpumanager] reconciling every 10s
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: I0817 03:20:37.506569   44656 policy_none.go:42] [cpumanager] none policy: Start
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 kubelet[44656]: F0817 03:20:37.507551   44656 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 17 03:20:37 old-k8s-version-20210817024805-1554185 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 03:20:37.861119 1770015 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.01s)
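
Note on the kubelet failure above: the fatal line "Failed to start ContainerManager ... failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: ... permission denied" shows the v1.14 kubelet unable to initialize the BestEffort QOS cgroup because the hugetlb limit files under /sys/fs/cgroup/hugetlb/kubepods/besteffort are not writable from inside the node container. The following stand-alone Go sketch is not part of minikube or its test suite; it merely repeats the same write (4611686018427387904 is 2^62, the "no limit" sentinel seen in the log) against the same hugetlb path so the permission error can be reproduced directly on the CI host, assuming cgroup v1 with the hugetlb controller mounted at the usual location.

	// Hypothetical diagnostic, not part of the test suite: repeats the hugetlb
	// limit write the kubelet performs when creating the BestEffort QOS cgroup,
	// so the "permission denied" in the kubelet log above can be reproduced.
	// Assumes cgroup v1 with hugetlb mounted at /sys/fs/cgroup/hugetlb.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// 4611686018427387904 == 2^62, the "no limit" value the kubelet writes.
		const noLimit = "4611686018427387904"
		pattern := "/sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.*.limit_in_bytes"

		files, err := filepath.Glob(pattern)
		if err != nil || len(files) == 0 {
			fmt.Printf("no hugetlb limit files matched %s (err=%v)\n", pattern, err)
			return
		}
		for _, f := range files {
			if werr := os.WriteFile(f, []byte(noLimit), 0o644); werr != nil {
				// e.g. "permission denied", matching the kubelet log above
				fmt.Printf("FAIL %s: %v\n", f, werr)
			} else {
				fmt.Printf("ok   %s\n", f)
			}
		}
	}

If this write fails with permission denied even as root, the unwritable hugetlb hierarchy exposed to the KIC container is the likely cause of the ContainerManager startup failure, rather than anything specific to this test.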

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-20210817030748-1554185 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-20210817030748-1554185 --alsologtostderr -v=1: exit status 80 (1.995159707s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-20210817030748-1554185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 03:15:29.129509 1749947 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:15:29.129625 1749947 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:15:29.129637 1749947 out.go:311] Setting ErrFile to fd 2...
	I0817 03:15:29.129654 1749947 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:15:29.129809 1749947 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:15:29.129978 1749947 out.go:305] Setting JSON to false
	I0817 03:15:29.130003 1749947 mustload.go:65] Loading cluster: no-preload-20210817030748-1554185
	I0817 03:15:29.130937 1749947 config.go:177] Loaded profile config "no-preload-20210817030748-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:15:29.131746 1749947 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:29.165203 1749947 host.go:66] Checking if "no-preload-20210817030748-1554185" exists ...
	I0817 03:15:29.165943 1749947 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-20210817030748-1554185 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0817 03:15:29.168095 1749947 out.go:177] * Pausing node no-preload-20210817030748-1554185 ... 
	I0817 03:15:29.168117 1749947 host.go:66] Checking if "no-preload-20210817030748-1554185" exists ...
	I0817 03:15:29.168360 1749947 ssh_runner.go:149] Run: systemctl --version
	I0817 03:15:29.168404 1749947 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:15:29.199686 1749947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:15:29.302140 1749947 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:15:29.310495 1749947 pause.go:50] kubelet running: true
	I0817 03:15:29.310536 1749947 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 03:15:29.493961 1749947 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 03:15:29.494046 1749947 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 03:15:29.591761 1749947 cri.go:76] found id: "0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe"
	I0817 03:15:29.591787 1749947 cri.go:76] found id: "555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564"
	I0817 03:15:29.591793 1749947 cri.go:76] found id: "664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44"
	I0817 03:15:29.591798 1749947 cri.go:76] found id: "6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7"
	I0817 03:15:29.591803 1749947 cri.go:76] found id: "9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18"
	I0817 03:15:29.591808 1749947 cri.go:76] found id: "db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3"
	I0817 03:15:29.591813 1749947 cri.go:76] found id: "f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d"
	I0817 03:15:29.591817 1749947 cri.go:76] found id: "aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3"
	I0817 03:15:29.591822 1749947 cri.go:76] found id: "8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1"
	I0817 03:15:29.591829 1749947 cri.go:76] found id: "1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d"
	I0817 03:15:29.591835 1749947 cri.go:76] found id: ""
	I0817 03:15:29.591889 1749947 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:15:29.652410 1749947 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe","pid":4267,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe/rootfs","created":"2021-08-17T03:15:10.336680518Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d","pid":4527,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d","rootfs":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d/rootfs","created":"2021-08-17T03:15:11.991201998Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d","pid":3989,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d/rootfs","created":"2021-08-17T03:15:08.253052758Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d","io.kubernetes.cri.sandbox-log-directory":"/var/log/po
ds/kube-system_coredns-78fcd69978-255bv_841a6924-fa23-40b4-b6b6-b9d024444fc5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2","pid":4408,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2/rootfs","created":"2021-08-17T03:15:11.038028616Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-l6fk8_48225b4b-30c2-4ed3-9c80-858b0ce448b9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe","pid":4319,"status":"running","bundle":"/run/containerd/io.containe
rd.runtime.v2.task/k8s.io/3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe/rootfs","created":"2021-08-17T03:15:10.425463634Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-wct66_ffb75899-dc46-4aaa-945c-7b87ae2e020f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51","pid":3041,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51/rootfs","created":"2021-08-17T03:14:44.254131676Z","an
notations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210817030748-1554185_f4c749a5d872b19b2659ad01e3fe5628"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564","pid":4068,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564/rootfs","created":"2021-08-17T03:15:08.455498375Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57b52fd566
63ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116","pid":4472,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116/rootfs","created":"2021-08-17T03:15:11.360207935Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-85bpp_dff4da5a-2bfd-4949-9ae0-1ac5b7d02599"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be","pid":3052,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be","rootfs":"/run/containerd/io.containerd
.runtime.v2.task/k8s.io/6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be/rootfs","created":"2021-08-17T03:14:44.256734406Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210817030748-1554185_23e4baa0b0064b7831401e1fa5764a24"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44","pid":3661,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44/rootfs","created":"2021-08-17T03:15:07.241646951Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sa
ndbox-id":"e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7","pid":3636,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7/rootfs","created":"2021-08-17T03:15:07.29170744Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a","pid":3093,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a","rootfs":"/r
un/containerd/io.containerd.runtime.v2.task/k8s.io/72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a/rootfs","created":"2021-08-17T03:14:44.311542933Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210817030748-1554185_1ff4996f188c76523775182248e3b8b9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631","pid":3588,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631/rootfs","created":"2021-08-17T03:15:07.093763554Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbo
x-id":"947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-r9qg6_5210d460-fdab-41e1-ad63-b434142322d6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18","pid":3220,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18/rootfs","created":"2021-08-17T03:14:44.487936554Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3","pid":3172,"status":"running","bundle":"/
run/containerd/io.containerd.runtime.v2.task/k8s.io/aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3/rootfs","created":"2021-08-17T03:14:44.419685071Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad","pid":3101,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad/rootfs","created":"2021-08-17T03:14:44.312709017Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","i
o.kubernetes.cri.sandbox-id":"d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210817030748-1554185_d28e4e86cdd24a49b7c5ce4cb15710c5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3","pid":3204,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3/rootfs","created":"2021-08-17T03:14:44.456949275Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c007
11","pid":3595,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711/rootfs","created":"2021-08-17T03:15:07.125882481Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-fd8hs_59bcc4a4-33b2-44c2-8da3-a777113aaf58"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2","pid":4218,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c
2/rootfs","created":"2021-08-17T03:15:10.176289605Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_7a0dec2f-5605-4e68-8128-d88da36ed6dd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d","pid":3170,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d/rootfs","created":"2021-08-17T03:14:44.443364199Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be"},"owner":"root"}]
	I0817 03:15:29.652721 1749947 cri.go:113] list returned 20 containers
	I0817 03:15:29.652734 1749947 cri.go:116] container: {ID:0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe Status:running}
	I0817 03:15:29.652755 1749947 cri.go:116] container: {ID:1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d Status:running}
	I0817 03:15:29.652765 1749947 cri.go:116] container: {ID:26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d Status:running}
	I0817 03:15:29.652771 1749947 cri.go:118] skipping 26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d - not in ps
	I0817 03:15:29.652778 1749947 cri.go:116] container: {ID:2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2 Status:running}
	I0817 03:15:29.652784 1749947 cri.go:118] skipping 2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2 - not in ps
	I0817 03:15:29.652794 1749947 cri.go:116] container: {ID:3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe Status:running}
	I0817 03:15:29.652799 1749947 cri.go:118] skipping 3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe - not in ps
	I0817 03:15:29.652807 1749947 cri.go:116] container: {ID:411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51 Status:running}
	I0817 03:15:29.652812 1749947 cri.go:118] skipping 411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51 - not in ps
	I0817 03:15:29.652825 1749947 cri.go:116] container: {ID:555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564 Status:running}
	I0817 03:15:29.652830 1749947 cri.go:116] container: {ID:57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116 Status:running}
	I0817 03:15:29.652836 1749947 cri.go:118] skipping 57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116 - not in ps
	I0817 03:15:29.652840 1749947 cri.go:116] container: {ID:6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be Status:running}
	I0817 03:15:29.652850 1749947 cri.go:118] skipping 6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be - not in ps
	I0817 03:15:29.652854 1749947 cri.go:116] container: {ID:664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44 Status:running}
	I0817 03:15:29.652865 1749947 cri.go:116] container: {ID:6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7 Status:running}
	I0817 03:15:29.652869 1749947 cri.go:116] container: {ID:72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a Status:running}
	I0817 03:15:29.652879 1749947 cri.go:118] skipping 72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a - not in ps
	I0817 03:15:29.652883 1749947 cri.go:116] container: {ID:947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631 Status:running}
	I0817 03:15:29.652895 1749947 cri.go:118] skipping 947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631 - not in ps
	I0817 03:15:29.652904 1749947 cri.go:116] container: {ID:9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18 Status:running}
	I0817 03:15:29.652909 1749947 cri.go:116] container: {ID:aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3 Status:running}
	I0817 03:15:29.652914 1749947 cri.go:116] container: {ID:d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad Status:running}
	I0817 03:15:29.652922 1749947 cri.go:118] skipping d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad - not in ps
	I0817 03:15:29.652930 1749947 cri.go:116] container: {ID:db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3 Status:running}
	I0817 03:15:29.652936 1749947 cri.go:116] container: {ID:e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711 Status:running}
	I0817 03:15:29.652943 1749947 cri.go:118] skipping e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711 - not in ps
	I0817 03:15:29.652949 1749947 cri.go:116] container: {ID:e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2 Status:running}
	I0817 03:15:29.652955 1749947 cri.go:118] skipping e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2 - not in ps
	I0817 03:15:29.652967 1749947 cri.go:116] container: {ID:f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d Status:running}
	I0817 03:15:29.653017 1749947 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe
	I0817 03:15:29.669165 1749947 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe 1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d
	I0817 03:15:29.690325 1749947 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe 1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:15:29Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0817 03:15:29.966665 1749947 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:15:29.975226 1749947 pause.go:50] kubelet running: false
	I0817 03:15:29.975270 1749947 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 03:15:30.094188 1749947 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 03:15:30.094258 1749947 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 03:15:30.165491 1749947 cri.go:76] found id: "0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe"
	I0817 03:15:30.165551 1749947 cri.go:76] found id: "555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564"
	I0817 03:15:30.165563 1749947 cri.go:76] found id: "664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44"
	I0817 03:15:30.165568 1749947 cri.go:76] found id: "6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7"
	I0817 03:15:30.165572 1749947 cri.go:76] found id: "9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18"
	I0817 03:15:30.165577 1749947 cri.go:76] found id: "db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3"
	I0817 03:15:30.165581 1749947 cri.go:76] found id: "f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d"
	I0817 03:15:30.165586 1749947 cri.go:76] found id: "aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3"
	I0817 03:15:30.165593 1749947 cri.go:76] found id: "8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1"
	I0817 03:15:30.165602 1749947 cri.go:76] found id: "1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d"
	I0817 03:15:30.165615 1749947 cri.go:76] found id: ""
	I0817 03:15:30.165659 1749947 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:15:30.211398 1749947 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe","pid":4267,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe/rootfs","created":"2021-08-17T03:15:10.336680518Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d","pid":4527,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d","rootfs":"/run/containerd/io.containerd
.runtime.v2.task/k8s.io/1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d/rootfs","created":"2021-08-17T03:15:11.991201998Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d","pid":3989,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d/rootfs","created":"2021-08-17T03:15:08.253052758Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pod
s/kube-system_coredns-78fcd69978-255bv_841a6924-fa23-40b4-b6b6-b9d024444fc5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2","pid":4408,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2/rootfs","created":"2021-08-17T03:15:11.038028616Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-l6fk8_48225b4b-30c2-4ed3-9c80-858b0ce448b9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe","pid":4319,"status":"running","bundle":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe/rootfs","created":"2021-08-17T03:15:10.425463634Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-wct66_ffb75899-dc46-4aaa-945c-7b87ae2e020f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51","pid":3041,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51/rootfs","created":"2021-08-17T03:14:44.254131676Z","ann
otations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210817030748-1554185_f4c749a5d872b19b2659ad01e3fe5628"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564","pid":4068,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564/rootfs","created":"2021-08-17T03:15:08.455498375Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57b52fd5666
3ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116","pid":4472,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116/rootfs","created":"2021-08-17T03:15:11.360207935Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-85bpp_dff4da5a-2bfd-4949-9ae0-1ac5b7d02599"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be","pid":3052,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be","rootfs":"/run/containerd/io.containerd.
runtime.v2.task/k8s.io/6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be/rootfs","created":"2021-08-17T03:14:44.256734406Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210817030748-1554185_23e4baa0b0064b7831401e1fa5764a24"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44","pid":3661,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44/rootfs","created":"2021-08-17T03:15:07.241646951Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.san
dbox-id":"e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7","pid":3636,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7/rootfs","created":"2021-08-17T03:15:07.29170744Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a","pid":3093,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a","rootfs":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a/rootfs","created":"2021-08-17T03:14:44.311542933Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210817030748-1554185_1ff4996f188c76523775182248e3b8b9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631","pid":3588,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631/rootfs","created":"2021-08-17T03:15:07.093763554Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox
-id":"947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-r9qg6_5210d460-fdab-41e1-ad63-b434142322d6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18","pid":3220,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18/rootfs","created":"2021-08-17T03:14:44.487936554Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3","pid":3172,"status":"running","bundle":"/r
un/containerd/io.containerd.runtime.v2.task/k8s.io/aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3/rootfs","created":"2021-08-17T03:14:44.419685071Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad","pid":3101,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad/rootfs","created":"2021-08-17T03:14:44.312709017Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io
.kubernetes.cri.sandbox-id":"d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210817030748-1554185_d28e4e86cdd24a49b7c5ce4cb15710c5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3","pid":3204,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3/rootfs","created":"2021-08-17T03:14:44.456949275Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c0071
1","pid":3595,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711/rootfs","created":"2021-08-17T03:15:07.125882481Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-fd8hs_59bcc4a4-33b2-44c2-8da3-a777113aaf58"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2","pid":4218,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2
/rootfs","created":"2021-08-17T03:15:10.176289605Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_7a0dec2f-5605-4e68-8128-d88da36ed6dd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d","pid":3170,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d/rootfs","created":"2021-08-17T03:14:44.443364199Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be"},"owner":"root"}]
	I0817 03:15:30.211638 1749947 cri.go:113] list returned 20 containers
	I0817 03:15:30.211651 1749947 cri.go:116] container: {ID:0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe Status:paused}
	I0817 03:15:30.211661 1749947 cri.go:122] skipping {0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe paused}: state = "paused", want "running"
	I0817 03:15:30.211674 1749947 cri.go:116] container: {ID:1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d Status:running}
	I0817 03:15:30.211686 1749947 cri.go:116] container: {ID:26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d Status:running}
	I0817 03:15:30.211693 1749947 cri.go:118] skipping 26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d - not in ps
	I0817 03:15:30.211703 1749947 cri.go:116] container: {ID:2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2 Status:running}
	I0817 03:15:30.211709 1749947 cri.go:118] skipping 2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2 - not in ps
	I0817 03:15:30.211713 1749947 cri.go:116] container: {ID:3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe Status:running}
	I0817 03:15:30.211722 1749947 cri.go:118] skipping 3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe - not in ps
	I0817 03:15:30.211726 1749947 cri.go:116] container: {ID:411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51 Status:running}
	I0817 03:15:30.211737 1749947 cri.go:118] skipping 411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51 - not in ps
	I0817 03:15:30.211741 1749947 cri.go:116] container: {ID:555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564 Status:running}
	I0817 03:15:30.211746 1749947 cri.go:116] container: {ID:57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116 Status:running}
	I0817 03:15:30.211760 1749947 cri.go:118] skipping 57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116 - not in ps
	I0817 03:15:30.211770 1749947 cri.go:116] container: {ID:6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be Status:running}
	I0817 03:15:30.211776 1749947 cri.go:118] skipping 6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be - not in ps
	I0817 03:15:30.211780 1749947 cri.go:116] container: {ID:664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44 Status:running}
	I0817 03:15:30.211787 1749947 cri.go:116] container: {ID:6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7 Status:running}
	I0817 03:15:30.211792 1749947 cri.go:116] container: {ID:72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a Status:running}
	I0817 03:15:30.211801 1749947 cri.go:118] skipping 72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a - not in ps
	I0817 03:15:30.211805 1749947 cri.go:116] container: {ID:947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631 Status:running}
	I0817 03:15:30.211815 1749947 cri.go:118] skipping 947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631 - not in ps
	I0817 03:15:30.211819 1749947 cri.go:116] container: {ID:9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18 Status:running}
	I0817 03:15:30.211828 1749947 cri.go:116] container: {ID:aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3 Status:running}
	I0817 03:15:30.211834 1749947 cri.go:116] container: {ID:d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad Status:running}
	I0817 03:15:30.211839 1749947 cri.go:118] skipping d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad - not in ps
	I0817 03:15:30.211843 1749947 cri.go:116] container: {ID:db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3 Status:running}
	I0817 03:15:30.211851 1749947 cri.go:116] container: {ID:e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711 Status:running}
	I0817 03:15:30.211856 1749947 cri.go:118] skipping e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711 - not in ps
	I0817 03:15:30.211866 1749947 cri.go:116] container: {ID:e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2 Status:running}
	I0817 03:15:30.211871 1749947 cri.go:118] skipping e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2 - not in ps
	I0817 03:15:30.211876 1749947 cri.go:116] container: {ID:f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d Status:running}
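The skip decisions above implement the pause selection rule: a container is kept only if runc reports it as running and its ID also appears in the earlier crictl listing for the targeted namespaces. A rough shell equivalent of that intersection, assuming jq is available on the node (it is not used by any command in this log):

    # IDs runc considers running
    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | select(.status == "running") | .id' | sort > /tmp/runc-running
    # IDs crictl reports for the kube-system namespace (the other namespaces above can be appended the same way)
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | sort > /tmp/crictl-ids
    # Containers minikube would attempt to pause: running AND present in the crictl listing
    comm -12 /tmp/runc-running /tmp/crictl-ids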
	I0817 03:15:30.211923 1749947 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d
	I0817 03:15:30.224928 1749947 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d 555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564
	I0817 03:15:30.237068 1749947 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d 555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:15:30Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0817 03:15:30.777732 1749947 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:15:30.787127 1749947 pause.go:50] kubelet running: false
	I0817 03:15:30.787223 1749947 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 03:15:30.910169 1749947 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 03:15:30.910243 1749947 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 03:15:30.979739 1749947 cri.go:76] found id: "0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe"
	I0817 03:15:30.979756 1749947 cri.go:76] found id: "555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564"
	I0817 03:15:30.979761 1749947 cri.go:76] found id: "664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44"
	I0817 03:15:30.979766 1749947 cri.go:76] found id: "6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7"
	I0817 03:15:30.979770 1749947 cri.go:76] found id: "9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18"
	I0817 03:15:30.979775 1749947 cri.go:76] found id: "db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3"
	I0817 03:15:30.979782 1749947 cri.go:76] found id: "f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d"
	I0817 03:15:30.979789 1749947 cri.go:76] found id: "aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3"
	I0817 03:15:30.979793 1749947 cri.go:76] found id: "8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1"
	I0817 03:15:30.979802 1749947 cri.go:76] found id: "1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d"
	I0817 03:15:30.979807 1749947 cri.go:76] found id: ""
	I0817 03:15:30.979852 1749947 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:15:31.030725 1749947 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe","pid":4267,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe/rootfs","created":"2021-08-17T03:15:10.336680518Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d","pid":4527,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d","rootfs":"/run/containerd/io.containerd.
runtime.v2.task/k8s.io/1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d/rootfs","created":"2021-08-17T03:15:11.991201998Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d","pid":3989,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d/rootfs","created":"2021-08-17T03:15:08.253052758Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods
/kube-system_coredns-78fcd69978-255bv_841a6924-fa23-40b4-b6b6-b9d024444fc5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2","pid":4408,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2/rootfs","created":"2021-08-17T03:15:11.038028616Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-l6fk8_48225b4b-30c2-4ed3-9c80-858b0ce448b9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe","pid":4319,"status":"running","bundle":"/run/containerd/io.containerd
.runtime.v2.task/k8s.io/3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe/rootfs","created":"2021-08-17T03:15:10.425463634Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-wct66_ffb75899-dc46-4aaa-945c-7b87ae2e020f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51","pid":3041,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51/rootfs","created":"2021-08-17T03:14:44.254131676Z","anno
tations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210817030748-1554185_f4c749a5d872b19b2659ad01e3fe5628"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564","pid":4068,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564/rootfs","created":"2021-08-17T03:15:08.455498375Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57b52fd56663
ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116","pid":4472,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116/rootfs","created":"2021-08-17T03:15:11.360207935Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-85bpp_dff4da5a-2bfd-4949-9ae0-1ac5b7d02599"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be","pid":3052,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be","rootfs":"/run/containerd/io.containerd.r
untime.v2.task/k8s.io/6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be/rootfs","created":"2021-08-17T03:14:44.256734406Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210817030748-1554185_23e4baa0b0064b7831401e1fa5764a24"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44","pid":3661,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44/rootfs","created":"2021-08-17T03:15:07.241646951Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sand
box-id":"e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7","pid":3636,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7/rootfs","created":"2021-08-17T03:15:07.29170744Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a","pid":3093,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a","rootfs":"/run
/containerd/io.containerd.runtime.v2.task/k8s.io/72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a/rootfs","created":"2021-08-17T03:14:44.311542933Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210817030748-1554185_1ff4996f188c76523775182248e3b8b9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631","pid":3588,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631/rootfs","created":"2021-08-17T03:15:07.093763554Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-
id":"947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-r9qg6_5210d460-fdab-41e1-ad63-b434142322d6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18","pid":3220,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18/rootfs","created":"2021-08-17T03:14:44.487936554Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3","pid":3172,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3/rootfs","created":"2021-08-17T03:14:44.419685071Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad","pid":3101,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad/rootfs","created":"2021-08-17T03:14:44.312709017Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.
kubernetes.cri.sandbox-id":"d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210817030748-1554185_d28e4e86cdd24a49b7c5ce4cb15710c5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3","pid":3204,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3/rootfs","created":"2021-08-17T03:14:44.456949275Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711
","pid":3595,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711/rootfs","created":"2021-08-17T03:15:07.125882481Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-fd8hs_59bcc4a4-33b2-44c2-8da3-a777113aaf58"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2","pid":4218,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2/
rootfs","created":"2021-08-17T03:15:10.176289605Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_7a0dec2f-5605-4e68-8128-d88da36ed6dd"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d","pid":3170,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d/rootfs","created":"2021-08-17T03:14:44.443364199Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be"},"owner":"root"}]
	I0817 03:15:31.031094 1749947 cri.go:113] list returned 20 containers
	I0817 03:15:31.031111 1749947 cri.go:116] container: {ID:0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe Status:paused}
	I0817 03:15:31.031122 1749947 cri.go:122] skipping {0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe paused}: state = "paused", want "running"
	I0817 03:15:31.031137 1749947 cri.go:116] container: {ID:1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d Status:paused}
	I0817 03:15:31.031148 1749947 cri.go:122] skipping {1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d paused}: state = "paused", want "running"
	I0817 03:15:31.031156 1749947 cri.go:116] container: {ID:26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d Status:running}
	I0817 03:15:31.031165 1749947 cri.go:118] skipping 26bc1107196ae2ac5097a0e68a4fdaf9e69a98e9aa0dc42bf64d792b611f194d - not in ps
	I0817 03:15:31.031169 1749947 cri.go:116] container: {ID:2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2 Status:running}
	I0817 03:15:31.031174 1749947 cri.go:118] skipping 2d2f70d01901980ebd48c666cd8249714955f5493754a2b7188000db3245a5c2 - not in ps
	I0817 03:15:31.031178 1749947 cri.go:116] container: {ID:3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe Status:running}
	I0817 03:15:31.031187 1749947 cri.go:118] skipping 3c84998709f20c7457719be276611c27ff65c61b6dbdd62852af8a0b874e5bbe - not in ps
	I0817 03:15:31.031192 1749947 cri.go:116] container: {ID:411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51 Status:running}
	I0817 03:15:31.031204 1749947 cri.go:118] skipping 411fb037027e5db69e06bdbc81512ee36a37103c8c249832a88f60c5f0914b51 - not in ps
	I0817 03:15:31.031208 1749947 cri.go:116] container: {ID:555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564 Status:running}
	I0817 03:15:31.031213 1749947 cri.go:116] container: {ID:57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116 Status:running}
	I0817 03:15:31.031222 1749947 cri.go:118] skipping 57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116 - not in ps
	I0817 03:15:31.031226 1749947 cri.go:116] container: {ID:6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be Status:running}
	I0817 03:15:31.031236 1749947 cri.go:118] skipping 6269a024ac283de0fdbec349ea5062ce27c1b6adf90dbd61ed51017281ed40be - not in ps
	I0817 03:15:31.031240 1749947 cri.go:116] container: {ID:664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44 Status:running}
	I0817 03:15:31.031247 1749947 cri.go:116] container: {ID:6975143cba57b0df51c2d35efdeedfb52d97f8dbc4ce8bcc5523097ab96719d7 Status:running}
	I0817 03:15:31.031254 1749947 cri.go:116] container: {ID:72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a Status:running}
	I0817 03:15:31.031260 1749947 cri.go:118] skipping 72b86153695a4e1c3ffef5c8951574bb16eb4e368a3c423b94b662fe30a4966a - not in ps
	I0817 03:15:31.031263 1749947 cri.go:116] container: {ID:947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631 Status:running}
	I0817 03:15:31.031271 1749947 cri.go:118] skipping 947112f02450ce52cf3ed55cc3aa393144cfbc3d903904d39b054259fdc0e631 - not in ps
	I0817 03:15:31.031276 1749947 cri.go:116] container: {ID:9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18 Status:running}
	I0817 03:15:31.031290 1749947 cri.go:116] container: {ID:aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3 Status:running}
	I0817 03:15:31.031295 1749947 cri.go:116] container: {ID:d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad Status:running}
	I0817 03:15:31.031300 1749947 cri.go:118] skipping d80c4c007562b4eef25bbe81e5385c31f0a1f78cf59d8acc5392ea2be161f3ad - not in ps
	I0817 03:15:31.031309 1749947 cri.go:116] container: {ID:db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3 Status:running}
	I0817 03:15:31.031314 1749947 cri.go:116] container: {ID:e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711 Status:running}
	I0817 03:15:31.031324 1749947 cri.go:118] skipping e0eb4637af52a23315d9638a01ef4cc4eb3df564e62b81a346c3030a53c00711 - not in ps
	I0817 03:15:31.031333 1749947 cri.go:116] container: {ID:e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2 Status:running}
	I0817 03:15:31.031339 1749947 cri.go:118] skipping e8863073b7ce8bebeefb7e6ada6d6acf9d460c242c2c92d8b85888ce1c4b12c2 - not in ps
	I0817 03:15:31.031344 1749947 cri.go:116] container: {ID:f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d Status:running}
	I0817 03:15:31.031384 1749947 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564
	I0817 03:15:31.044627 1749947 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564 664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44
	I0817 03:15:31.058466 1749947 out.go:177] 
	W0817 03:15:31.058591 1749947 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564 664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:15:31Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0817 03:15:31.058608 1749947 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0817 03:15:31.066363 1749947 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_3.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0817 03:15:31.068476 1749947 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-arm64 pause -p no-preload-20210817030748-1554185 --alsologtostderr -v=1 failed: exit status 80
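The exit status 80 corresponds to the GUEST_PAUSE error shown in the stderr above: minikube batched two container IDs into a single `runc pause` invocation, while runc's own usage text states that pause takes exactly one container ID per call. A minimal per-container form of the same operation, reusing the two IDs from the failing command (a workaround sketch, not the code path the test exercises):

    for id in 555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564 \
              664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44; do
      sudo runc --root /run/containerd/runc/k8s.io pause "$id"   # one ID per invocation, as the usage text requires
    done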
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210817030748-1554185
helpers_test.go:236: (dbg) docker inspect no-preload-20210817030748-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d",
	        "Created": "2021-08-17T03:07:49.718412699Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1735060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T03:09:46.379410525Z",
	            "FinishedAt": "2021-08-17T03:09:45.091566147Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d/hostname",
	        "HostsPath": "/var/lib/docker/containers/80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d/hosts",
	        "LogPath": "/var/lib/docker/containers/80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d/80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d-json.log",
	        "Name": "/no-preload-20210817030748-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210817030748-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210817030748-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/15e895b9138d0bdfa0e7ee3396e4f20c5d39d0a934f410d57fb83618afc25d80-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15e895b9138d0bdfa0e7ee3396e4f20c5d39d0a934f410d57fb83618afc25d80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15e895b9138d0bdfa0e7ee3396e4f20c5d39d0a934f410d57fb83618afc25d80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15e895b9138d0bdfa0e7ee3396e4f20c5d39d0a934f410d57fb83618afc25d80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210817030748-1554185",
	                "Source": "/var/lib/docker/volumes/no-preload-20210817030748-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210817030748-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210817030748-1554185",
	                "name.minikube.sigs.k8s.io": "no-preload-20210817030748-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a441a4132ef69378491cb7e753ee36b706e2c7f656b9cb2a62b52763b1c4b562",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50493"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50492"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50489"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50491"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50490"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a441a4132ef6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210817030748-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "80cc9e585a53",
	                        "no-preload-20210817030748-1554185"
	                    ],
	                    "NetworkID": "cd4aea319b4395fa95af3da2082bfc6147b0843f2f50f1029b55cb467a861890",
	                    "EndpointID": "eff2a9a3f8b00faf610ca859723eaa49f986a3775044af2722829183e8342750",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185: exit status 2 (391.324072ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-20210817030748-1554185 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:253: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                      Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:48:52 UTC | Tue, 17 Aug 2021 02:50:54 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:02 UTC | Tue, 17 Aug 2021 02:51:03 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:03 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:57:04 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:57:15 UTC | Tue, 17 Aug 2021 02:57:15 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:05 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 03:01:12 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:21 UTC | Tue, 17 Aug 2021 03:01:22 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:22 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:07:27 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:38 UTC | Tue, 17 Aug 2021 03:07:38 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:40 UTC | Tue, 17 Aug 2021 03:07:41 UTC |
	|         | logs -n 25                                        |                                                   |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:42 UTC | Tue, 17 Aug 2021 03:07:43 UTC |
	|         | logs -n 25                                        |                                                   |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:44 UTC | Tue, 17 Aug 2021 03:07:47 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:47 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210817030748-1554185      | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | disable-driver-mounts-20210817030748-1554185      |                                                   |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:09:14 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:24 UTC | Tue, 17 Aug 2021 03:09:24 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:25 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:15:18 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:28 UTC | Tue, 17 Aug 2021 03:15:29 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 03:09:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 03:09:45.595717 1734845 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:09:45.595882 1734845 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:09:45.595892 1734845 out.go:311] Setting ErrFile to fd 2...
	I0817 03:09:45.595896 1734845 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:09:45.596029 1734845 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:09:45.596261 1734845 out.go:305] Setting JSON to false
	I0817 03:09:45.597078 1734845 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39124,"bootTime":1629130662,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 03:09:45.597149 1734845 start.go:121] virtualization:  
	I0817 03:09:45.599691 1734845 out.go:177] * [no-preload-20210817030748-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 03:09:45.602314 1734845 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 03:09:45.601227 1734845 notify.go:169] Checking for updates...
	I0817 03:09:45.604506 1734845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:09:45.606220 1734845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 03:09:45.607782 1734845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 03:09:45.608182 1734845 config.go:177] Loaded profile config "no-preload-20210817030748-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:09:45.608622 1734845 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 03:09:45.645796 1734845 docker.go:132] docker version: linux-20.10.8
	I0817 03:09:45.645869 1734845 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:09:45.761101 1734845 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:09:45.693503398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:09:45.761245 1734845 docker.go:244] overlay module found
	I0817 03:09:45.763624 1734845 out.go:177] * Using the docker driver based on existing profile
	I0817 03:09:45.763643 1734845 start.go:278] selected driver: docker
	I0817 03:09:45.763649 1734845 start.go:751] validating driver "docker" against &{Name:no-preload-20210817030748-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:09:45.763770 1734845 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 03:09:45.763808 1734845 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:09:45.763823 1734845 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:09:45.765334 1734845 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:09:45.765622 1734845 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:09:45.879144 1734845 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:09:45.801060075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 03:09:45.879289 1734845 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:09:45.879303 1734845 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:09:45.881509 1734845 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:09:45.881598 1734845 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 03:09:45.881618 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:09:45.881625 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:09:45.881634 1734845 start_flags.go:277] config:
	{Name:no-preload-20210817030748-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:09:45.883980 1734845 out.go:177] * Starting control plane node no-preload-20210817030748-1554185 in cluster no-preload-20210817030748-1554185
	I0817 03:09:45.884009 1734845 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 03:09:45.885867 1734845 out.go:177] * Pulling base image ...
	I0817 03:09:45.885887 1734845 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 03:09:45.886004 1734845 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/config.json ...
	I0817 03:09:45.886270 1734845 cache.go:108] acquiring lock: {Name:mk632f6e0db9416813fd07fccbb58335b8e59d21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886405 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0817 03:09:45.886419 1734845 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 157.077µs
	I0817 03:09:45.886429 1734845 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0817 03:09:45.886443 1734845 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 03:09:45.886608 1734845 cache.go:108] acquiring lock: {Name:mk4fc0e92492b47d614457da59bc6dab952f8b05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886684 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0817 03:09:45.886696 1734845 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 92.225µs
	I0817 03:09:45.886705 1734845 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0817 03:09:45.886719 1734845 cache.go:108] acquiring lock: {Name:mk6dba5734dfeaf6d9d4511e98f054cac0439cfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886771 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0817 03:09:45.886780 1734845 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 62.432µs
	I0817 03:09:45.886790 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0817 03:09:45.886801 1734845 cache.go:108] acquiring lock: {Name:mkacaa9736949fc5d0494bb1d5c3531771bb3ea8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886855 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0817 03:09:45.886864 1734845 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 64.738µs
	I0817 03:09:45.886873 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0817 03:09:45.886885 1734845 cache.go:108] acquiring lock: {Name:mkf7cd9af6d882fda3a954c4eb39d82dc77cd0d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886917 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0817 03:09:45.886924 1734845 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 41.107µs
	I0817 03:09:45.886932 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0817 03:09:45.886942 1734845 cache.go:108] acquiring lock: {Name:mkeec948dbb922c159c4fc1af8656d60fa14d5a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886975 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0817 03:09:45.886984 1734845 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 42.682µs
	I0817 03:09:45.886994 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0817 03:09:45.887005 1734845 cache.go:108] acquiring lock: {Name:mkb04986d0796ebd5c4c0669e3d06018c5856bea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887038 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0817 03:09:45.887045 1734845 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 41.862µs
	I0817 03:09:45.887053 1734845 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0817 03:09:45.887063 1734845 cache.go:108] acquiring lock: {Name:mk79883006bb65c2c14816b6b80621971bab0e63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887095 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0817 03:09:45.887102 1734845 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 40.5µs
	I0817 03:09:45.887110 1734845 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0817 03:09:45.887121 1734845 cache.go:108] acquiring lock: {Name:mk9f3113ef4c19ec91ec377b2f94212c471844e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887153 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0817 03:09:45.887160 1734845 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 40.606µs
	I0817 03:09:45.887170 1734845 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0817 03:09:45.887180 1734845 cache.go:108] acquiring lock: {Name:mk17550e76c320cd5e7ed26cfb8c625219e409db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887221 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0817 03:09:45.887229 1734845 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 49.772µs
	I0817 03:09:45.887241 1734845 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0817 03:09:45.887246 1734845 cache.go:88] Successfully saved all images to host disk.
	I0817 03:09:45.961928 1734845 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 03:09:45.961949 1734845 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 03:09:45.961966 1734845 cache.go:205] Successfully downloaded all kic artifacts
	I0817 03:09:45.962001 1734845 start.go:313] acquiring machines lock for no-preload-20210817030748-1554185: {Name:mkb71c7d4561b567efc566d76b68a021481de41c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.962079 1734845 start.go:317] acquired machines lock for "no-preload-20210817030748-1554185" in 63.121µs
	I0817 03:09:45.962097 1734845 start.go:93] Skipping create...Using existing machine configuration
	I0817 03:09:45.962102 1734845 fix.go:55] fixHost starting: 
	I0817 03:09:45.962404 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:09:46.008511 1734845 fix.go:108] recreateIfNeeded on no-preload-20210817030748-1554185: state=Stopped err=<nil>
	W0817 03:09:46.008543 1734845 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 03:09:46.010730 1734845 out.go:177] * Restarting existing docker container for "no-preload-20210817030748-1554185" ...
	I0817 03:09:46.010790 1734845 cli_runner.go:115] Run: docker start no-preload-20210817030748-1554185
	I0817 03:09:46.387790 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:09:46.422718 1734845 kic.go:420] container "no-preload-20210817030748-1554185" state is running.
	I0817 03:09:46.423270 1734845 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210817030748-1554185
	I0817 03:09:46.455024 1734845 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/config.json ...
	I0817 03:09:46.455190 1734845 machine.go:88] provisioning docker machine ...
	I0817 03:09:46.455203 1734845 ubuntu.go:169] provisioning hostname "no-preload-20210817030748-1554185"
	I0817 03:09:46.455245 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:46.490850 1734845 main.go:130] libmachine: Using SSH client type: native
	I0817 03:09:46.491023 1734845 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50493 <nil> <nil>}
	I0817 03:09:46.491052 1734845 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210817030748-1554185 && echo "no-preload-20210817030748-1554185" | sudo tee /etc/hostname
	I0817 03:09:46.491652 1734845 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52816->127.0.0.1:50493: read: connection reset by peer
	I0817 03:09:49.617716 1734845 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210817030748-1554185
	
	I0817 03:09:49.617784 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:49.649149 1734845 main.go:130] libmachine: Using SSH client type: native
	I0817 03:09:49.649317 1734845 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50493 <nil> <nil>}
	I0817 03:09:49.649345 1734845 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210817030748-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210817030748-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210817030748-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 03:09:49.761988 1734845 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 03:09:49.762013 1734845 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 03:09:49.762044 1734845 ubuntu.go:177] setting up certificates
	I0817 03:09:49.762053 1734845 provision.go:83] configureAuth start
	I0817 03:09:49.762113 1734845 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210817030748-1554185
	I0817 03:09:49.805225 1734845 provision.go:138] copyHostCerts
	I0817 03:09:49.805278 1734845 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 03:09:49.805285 1734845 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 03:09:49.805341 1734845 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 03:09:49.805411 1734845 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 03:09:49.805418 1734845 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 03:09:49.805438 1734845 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 03:09:49.805482 1734845 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 03:09:49.805486 1734845 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 03:09:49.805505 1734845 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 03:09:49.805539 1734845 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210817030748-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210817030748-1554185]
	I0817 03:09:50.088826 1734845 provision.go:172] copyRemoteCerts
	I0817 03:09:50.088904 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 03:09:50.088957 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.120178 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.204693 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 03:09:50.219811 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0817 03:09:50.234847 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 03:09:50.249568 1734845 provision.go:86] duration metric: configureAuth took 487.504363ms
	I0817 03:09:50.249586 1734845 ubuntu.go:193] setting minikube options for container-runtime
	I0817 03:09:50.249747 1734845 config.go:177] Loaded profile config "no-preload-20210817030748-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:09:50.249761 1734845 machine.go:91] provisioned docker machine in 3.794563903s
	I0817 03:09:50.249768 1734845 start.go:267] post-start starting for "no-preload-20210817030748-1554185" (driver="docker")
	I0817 03:09:50.249775 1734845 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 03:09:50.249819 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 03:09:50.249855 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.280274 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.364600 1734845 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 03:09:50.366851 1734845 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 03:09:50.366874 1734845 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 03:09:50.366885 1734845 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 03:09:50.366893 1734845 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 03:09:50.366902 1734845 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 03:09:50.366958 1734845 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 03:09:50.367038 1734845 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 03:09:50.367128 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 03:09:50.372464 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:09:50.386603 1734845 start.go:270] post-start completed in 136.82351ms
	I0817 03:09:50.386665 1734845 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 03:09:50.386708 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.416929 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.499112 1734845 fix.go:57] fixHost completed within 4.537005735s
	I0817 03:09:50.499156 1734845 start.go:80] releasing machines lock for "no-preload-20210817030748-1554185", held for 4.537067921s
	I0817 03:09:50.499234 1734845 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210817030748-1554185
	I0817 03:09:50.528911 1734845 ssh_runner.go:149] Run: systemctl --version
	I0817 03:09:50.528958 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.529174 1734845 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 03:09:50.529224 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.570479 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.581923 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.789210 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 03:09:50.807188 1734845 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 03:09:50.817365 1734845 docker.go:153] disabling docker service ...
	I0817 03:09:50.817403 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 03:09:50.828441 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 03:09:50.838006 1734845 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 03:09:50.937592 1734845 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 03:09:51.044840 1734845 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 03:09:51.053358 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 03:09:51.064501 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
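	Note: the payload streamed above is a base64-encoded containerd config.toml (runc v2 runtime, overlayfs snapshotter, CRI stream port 10010). To see what was actually written, either read it back on the node or decode the payload locally (sketch; <payload> stands for the base64 string from the log):
	  # on the node, after the tee completes
	  sudo head -n 20 /etc/containerd/config.toml
	  # or locally, without touching the node
	  echo "<payload>" | base64 -d | less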
	I0817 03:09:51.075958 1734845 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 03:09:51.081433 1734845 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 03:09:51.086713 1734845 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 03:09:51.172289 1734845 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 03:09:51.291561 1734845 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 03:09:51.291621 1734845 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 03:09:51.295154 1734845 start.go:413] Will wait 60s for crictl version
	I0817 03:09:51.295201 1734845 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:09:51.318238 1734845 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T03:09:51Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 03:10:02.365028 1734845 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:10:02.391854 1734845 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
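	Note: the single failed "sudo crictl version" right after "systemctl restart containerd" is expected; the CRI server needs a moment to come up, which is why minikube retries. Because /etc/crictl.yaml was written earlier, crictl already defaults to the containerd socket, so a manual equivalent of that retry would be (sketch):
	  # poll until the CRI endpoint answers, then print the runtime and API versions
	  until sudo crictl version >/dev/null 2>&1; do sleep 1; done
	  sudo crictl version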
	I0817 03:10:02.391910 1734845 ssh_runner.go:149] Run: containerd --version
	I0817 03:10:02.413907 1734845 ssh_runner.go:149] Run: containerd --version
	I0817 03:10:02.437726 1734845 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0817 03:10:02.437797 1734845 cli_runner.go:115] Run: docker network inspect no-preload-20210817030748-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:10:02.468777 1734845 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 03:10:02.471802 1734845 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
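	Note: the one-liner above strips any stale host.minikube.internal entry and rewrites /etc/hosts through a temp file. Verifying the result is just (sketch):
	  # the gateway IP should now resolve the host-side name inside the node
	  grep host.minikube.internal /etc/hosts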
	I0817 03:10:02.480305 1734845 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 03:10:02.480343 1734845 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:10:02.504624 1734845 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:10:02.504642 1734845 cache_images.go:74] Images are preloaded, skipping loading
	I0817 03:10:02.504681 1734845 ssh_runner.go:149] Run: sudo crictl info
	I0817 03:10:02.525951 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:10:02.525972 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:10:02.525982 1734845 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 03:10:02.525994 1734845 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210817030748-1554185 NodeName:no-preload-20210817030748-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupf
s ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 03:10:02.526120 1734845 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20210817030748-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 03:10:02.526201 1734845 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20210817030748-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
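	Note: the unit drop-in above is what pins kubelet to the containerd socket and the kindnet CNI conf dir. Once the files land in /etc/systemd/system/kubelet.service.d/ (see the scp lines below), a typical way to pick them up would be (sketch; in this log the later "kubeadm init phase kubelet-start" handles the restart):
	  # reload unit files so the new drop-in is honoured
	  sudo systemctl daemon-reload
	  sudo systemctl restart kubelet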
	I0817 03:10:02.526251 1734845 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0817 03:10:02.532165 1734845 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 03:10:02.532209 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 03:10:02.538076 1734845 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (583 bytes)
	I0817 03:10:02.549244 1734845 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 03:10:02.559733 1734845 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
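	Note: kubeadm.yaml.new is the rendered config shown above; once it is promoted to /var/tmp/minikube/kubeadm.yaml it is consumed phase by phase, as the later log lines show, e.g.:
	  # run a single init phase against the generated config (paths taken from this log)
	  sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH \
	    kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml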
	I0817 03:10:02.570496 1734845 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 03:10:02.572893 1734845 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:10:02.580287 1734845 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185 for IP: 192.168.49.2
	I0817 03:10:02.580356 1734845 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 03:10:02.580376 1734845 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 03:10:02.580418 1734845 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.key
	I0817 03:10:02.580452 1734845 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/apiserver.key.dd3b5fb2
	I0817 03:10:02.580472 1734845 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/proxy-client.key
	I0817 03:10:02.580563 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 03:10:02.580621 1734845 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 03:10:02.580635 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 03:10:02.580658 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 03:10:02.580690 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 03:10:02.580716 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 03:10:02.580762 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:10:02.581815 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 03:10:02.596113 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 03:10:02.610196 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 03:10:02.624534 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 03:10:02.638602 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 03:10:02.652883 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 03:10:02.667765 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 03:10:02.682078 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 03:10:02.696076 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 03:10:02.710132 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 03:10:02.724104 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 03:10:02.741265 1734845 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 03:10:02.753135 1734845 ssh_runner.go:149] Run: openssl version
	I0817 03:10:02.758662 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 03:10:02.766005 1734845 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 03:10:02.769984 1734845 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 03:10:02.770058 1734845 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 03:10:02.774787 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 03:10:02.782462 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 03:10:02.788967 1734845 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 03:10:02.791943 1734845 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 03:10:02.792016 1734845 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 03:10:02.796538 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 03:10:02.803506 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 03:10:02.810234 1734845 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:10:02.813269 1734845 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:10:02.813341 1734845 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:10:02.817670 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
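	Note: the 8-hex-digit names (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash links, which is how the system trust store is indexed. Reproducing one by hand looks like this (sketch, values taken from the log lines above):
	  # the hash printed here is the basename of the symlink created above
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0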
	I0817 03:10:02.826139 1734845 kubeadm.go:390] StartCluster: {Name:no-preload-20210817030748-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Sched
uledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:10:02.826272 1734845 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 03:10:02.829083 1734845 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:10:02.860893 1734845 cri.go:76] found id: "5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d"
	I0817 03:10:02.860910 1734845 cri.go:76] found id: "9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919"
	I0817 03:10:02.860916 1734845 cri.go:76] found id: "36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5"
	I0817 03:10:02.860920 1734845 cri.go:76] found id: "d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499"
	I0817 03:10:02.860925 1734845 cri.go:76] found id: "cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60"
	I0817 03:10:02.860931 1734845 cri.go:76] found id: "761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63"
	I0817 03:10:02.860938 1734845 cri.go:76] found id: "55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f"
	I0817 03:10:02.860943 1734845 cri.go:76] found id: "4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc"
	I0817 03:10:02.860947 1734845 cri.go:76] found id: ""
	I0817 03:10:02.860986 1734845 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:10:02.876381 1734845 cri.go:103] JSON = null
	W0817 03:10:02.876425 1734845 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
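	Note: the warning compares two views of the same runtime: crictl (which listed 8 kube-system containers above) and runc's own state directory (which returned an empty JSON list), so minikube concludes nothing is paused and moves on. The two commands being compared are copied from the log:
	  # CRI view of kube-system containers
	  sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	  # low-level runc view of the same root
	  sudo runc --root /run/containerd/runc/k8s.io list -f json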
	I0817 03:10:02.876467 1734845 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 03:10:02.883651 1734845 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 03:10:02.883665 1734845 kubeadm.go:600] restartCluster start
	I0817 03:10:02.883716 1734845 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 03:10:02.891213 1734845 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:02.892046 1734845 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210817030748-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:10:02.892308 1734845 kubeconfig.go:128] "no-preload-20210817030748-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 03:10:02.892839 1734845 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:10:02.895523 1734845 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 03:10:02.902064 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:02.902116 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:02.911081 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.111403 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.111554 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.120856 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.312115 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.312186 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.321126 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.511190 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.511278 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.520092 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.711390 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.711449 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.720190 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.911517 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.911590 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.927188 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.111420 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.111475 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.120528 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.311784 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.311847 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.320515 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.511725 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.511806 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.520331 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.711571 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.711653 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.720228 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.911498 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.911603 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.922093 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.111424 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.111494 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.120830 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.312038 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.312096 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.320765 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.512027 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.512071 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.520648 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.712032 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.712087 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.720826 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.911883 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.911985 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.923085 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.923128 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.923192 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.932466 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.932514 1734845 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0817 03:10:05.932533 1734845 kubeadm.go:1032] stopping kube-system containers ...
	I0817 03:10:05.932552 1734845 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:10:05.932619 1734845 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:10:05.959487 1734845 cri.go:76] found id: "5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d"
	I0817 03:10:05.959505 1734845 cri.go:76] found id: "9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919"
	I0817 03:10:05.959510 1734845 cri.go:76] found id: "36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5"
	I0817 03:10:05.959517 1734845 cri.go:76] found id: "d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499"
	I0817 03:10:05.959521 1734845 cri.go:76] found id: "cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60"
	I0817 03:10:05.959526 1734845 cri.go:76] found id: "761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63"
	I0817 03:10:05.959534 1734845 cri.go:76] found id: "55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f"
	I0817 03:10:05.959539 1734845 cri.go:76] found id: "4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc"
	I0817 03:10:05.959548 1734845 cri.go:76] found id: ""
	I0817 03:10:05.959553 1734845 cri.go:221] Stopping containers: [5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d 9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919 36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5 d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499 cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60 761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63 55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f 4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc]
	I0817 03:10:05.959598 1734845 ssh_runner.go:149] Run: which crictl
	I0817 03:10:05.962096 1734845 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d 9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919 36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5 d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499 cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60 761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63 55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f 4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc
	I0817 03:10:05.985016 1734845 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 03:10:05.994165 1734845 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:10:06.000009 1734845 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 17 03:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 17 03:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 17 03:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 17 03:08 /etc/kubernetes/scheduler.conf
	
	I0817 03:10:06.000052 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 03:10:06.005722 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 03:10:06.011325 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 03:10:06.016859 1734845 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:06.016897 1734845 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 03:10:06.022694 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 03:10:06.028263 1734845 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:06.028305 1734845 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 03:10:06.034042 1734845 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:10:06.039704 1734845 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 03:10:06.039723 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:06.082382 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.615294 1734845 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.532882729s)
	I0817 03:10:08.615317 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.767910 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.915113 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.981033 1734845 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:10:08.981090 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:09.491936 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:09.991932 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:10.492229 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:10.991690 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:11.491572 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:11.991560 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:12.491549 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:12.992465 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:13.491498 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:13.991910 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:14.492177 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:14.991942 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:15.492364 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:15.511549 1734845 api_server.go:70] duration metric: took 6.530524968s to wait for apiserver process to appear ...
	I0817 03:10:15.511565 1734845 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:10:15.511573 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:20.514891 1734845 api_server.go:255] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 03:10:21.015169 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:21.565807 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:10:21.565826 1734845 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:10:22.015051 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:22.041924 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:10:22.041942 1734845 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:10:22.515122 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:22.524921 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:10:22.524982 1734845 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
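	Note: the component-by-component listing above is the apiserver's verbose healthz output; the 500s clear once the rbac/bootstrap-roles and apiservice-registration post-start hooks finish. The same view can be pulled by hand (sketch; may return 403 until the RBAC bootstrap roles exist, as the earlier system:anonymous response shows):
	  # ask the apiserver for the per-check breakdown
	  curl -k "https://192.168.49.2:8443/healthz?verbose"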
	I0817 03:10:23.015376 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:23.031209 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:10:23.058291 1734845 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 03:10:23.058308 1734845 api_server.go:129] duration metric: took 7.546737318s to wait for apiserver health ...
	I0817 03:10:23.058317 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:10:23.058324 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:10:23.061243 1734845 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:10:23.061294 1734845 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:10:23.065558 1734845 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0817 03:10:23.065571 1734845 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:10:23.111700 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
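	Note: cni.yaml here is the kindnet manifest recommended above for the docker driver + containerd combination; it is applied with the cluster's own kubectl binary and kubeconfig. A quick follow-up check would be (sketch; resource selection assumed rather than taken from this log):
	  sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get daemonsets,pods -o wide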
	I0817 03:10:23.541478 1734845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:10:23.554265 1734845 system_pods.go:59] 9 kube-system pods found
	I0817 03:10:23.554329 1734845 system_pods.go:61] "coredns-78fcd69978-nxgmv" [e5cfb032-8c57-472c-8433-778c79a640b2] Running
	I0817 03:10:23.554348 1734845 system_pods.go:61] "etcd-no-preload-20210817030748-1554185" [a8887420-4d93-40e6-98dc-1983e6a39b00] Running
	I0817 03:10:23.554366 1734845 system_pods.go:61] "kindnet-w55nn" [b64f1d5a-7c2e-44a2-bb39-0461eb1fc34f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 03:10:23.554381 1734845 system_pods.go:61] "kube-apiserver-no-preload-20210817030748-1554185" [e4ac61de-aae2-40be-8dd1-8de97f9fbbf0] Running
	I0817 03:10:23.554399 1734845 system_pods.go:61] "kube-controller-manager-no-preload-20210817030748-1554185" [80d8992e-cee6-4d6c-9a3c-02efe38509c3] Running
	I0817 03:10:23.554425 1734845 system_pods.go:61] "kube-proxy-2wcnd" [98d1ffc4-ef5d-4686-85c5-e6c7c706a5d0] Running
	I0817 03:10:23.554446 1734845 system_pods.go:61] "kube-scheduler-no-preload-20210817030748-1554185" [da680647-558b-4c7f-9ea4-0493359ec794] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 03:10:23.554463 1734845 system_pods.go:61] "metrics-server-7c784ccb57-g4znl" [f28ee3e1-229f-43f7-a493-4ad334a03e12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:10:23.554479 1734845 system_pods.go:61] "storage-provisioner" [c8fcde2f-327e-462a-8883-25cd16bd9a0f] Running
	I0817 03:10:23.554495 1734845 system_pods.go:74] duration metric: took 13.002435ms to wait for pod list to return data ...
	I0817 03:10:23.554512 1734845 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:10:23.558744 1734845 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:10:23.558803 1734845 node_conditions.go:123] node cpu capacity is 2
	I0817 03:10:23.558880 1734845 node_conditions.go:105] duration metric: took 4.351282ms to run NodePressure ...
	I0817 03:10:23.558910 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:23.890429 1734845 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 03:10:23.895047 1734845 kubeadm.go:746] kubelet initialised
	I0817 03:10:23.895068 1734845 kubeadm.go:747] duration metric: took 4.62177ms waiting for restarted kubelet to initialise ...
	I0817 03:10:23.895075 1734845 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
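	Note: this "extra waiting" phase polls each system-critical pod for the Ready condition with a 4m budget. Roughly the same check by hand would be (sketch; the context name comes from this profile):
	  # wait for the DNS pod the log is tracking below
	  kubectl --context no-preload-20210817030748-1554185 -n kube-system \
	    wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m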
	I0817 03:10:23.901002 1734845 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:25.915651 1734845 pod_ready.go:102] pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:28.415696 1734845 pod_ready.go:102] pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:30.914925 1734845 pod_ready.go:92] pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:30.914950 1734845 pod_ready.go:81] duration metric: took 7.013913856s waiting for pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:30.914960 1734845 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:31.424098 1734845 pod_ready.go:92] pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:31.424117 1734845 pod_ready.go:81] duration metric: took 509.148838ms waiting for pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:31.424129 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.436384 1734845 pod_ready.go:92] pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.436405 1734845 pod_ready.go:81] duration metric: took 1.012268093s waiting for pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.436416 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.440968 1734845 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.440984 1734845 pod_ready.go:81] duration metric: took 4.56056ms waiting for pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.441001 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2wcnd" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.445649 1734845 pod_ready.go:92] pod "kube-proxy-2wcnd" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.445666 1734845 pod_ready.go:81] duration metric: took 4.656387ms waiting for pod "kube-proxy-2wcnd" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.445674 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.513148 1734845 pod_ready.go:92] pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.513166 1734845 pod_ready.go:81] duration metric: took 67.484919ms waiting for pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.513175 1734845 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:34.918735 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:36.919489 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:39.422799 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:41.992397 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:44.427232 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:46.918669 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:48.918991 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:50.919214 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:53.421151 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:55.918970 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:58.420373 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:00.926327 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:03.424380 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:05.919407 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:08.419192 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:10.419678 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:12.918432 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:14.919796 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:17.418745 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:19.420001 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:21.918548 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:23.919596 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:26.419907 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:28.423199 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:30.919598 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:33.419084 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:35.422014 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:37.918235 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:39.919565 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:42.418423 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:44.418583 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:46.919733 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:49.420435 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:51.919638 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:53.923772 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:56.418260 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:58.423157 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:00.919146 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:03.418904 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:05.919056 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:08.418918 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:10.919142 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:13.418600 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:15.419023 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:17.919206 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:20.417842 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:22.418846 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:24.418955 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:26.919808 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:29.418685 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:31.418797 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:33.919198 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:36.418764 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:38.920189 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:41.418574 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:43.918305 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:45.919075 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:48.418664 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:50.919726 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:53.419069 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:55.919189 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:58.418564 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:00.919982 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:03.418698 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:05.919946 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:08.418314 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:10.418898 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:12.918325 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:14.920960 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:17.417730 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:19.418086 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:21.423527 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:23.919120 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:25.919234 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:28.418290 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:30.418955 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:32.918837 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:34.918880 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:36.919626 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:39.418715 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:41.919114 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:44.418476 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:46.418703 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:48.418748 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:50.918940 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:52.919109 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:54.924520 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:57.417571 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:59.418960 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:01.918890 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:03.918954 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:05.919306 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:08.418535 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:10.919974 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:13.417849 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:15.418070 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:17.418845 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:19.919781 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:22.418143 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:24.418467 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:26.919554 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:29.420231 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:31.919823 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:32.914868 1734845 pod_ready.go:81] duration metric: took 4m0.401677923s waiting for pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace to be "Ready" ...
	E0817 03:14:32.914891 1734845 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0817 03:14:32.914910 1734845 pod_ready.go:38] duration metric: took 4m9.019812141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:14:32.914976 1734845 kubeadm.go:604] restartCluster took 4m30.03130602s
	W0817 03:14:32.915121 1734845 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
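The long run of pod_ready.go:102 lines above is minikube polling the Ready condition of metrics-server-7c784ccb57-g4znl every couple of seconds until the 4m0s budget is spent. pod_ready.go talks to the API server directly; an equivalent one-shot check from a shell (a sketch, not the code minikube runs) is:

    kubectl --namespace kube-system wait --for=condition=Ready pod/metrics-server-7c784ccb57-g4znl --timeout=4m

Because the metrics-server image never becomes pullable (see the containerd journal further down), this wait can only time out, and that timeout is what triggers the cluster reset on the next lines.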
	I0817 03:14:32.915162 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0817 03:14:35.085846 1734845 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.170660328s)
	I0817 03:14:35.085904 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0817 03:14:35.095598 1734845 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:14:35.095652 1734845 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:14:35.117507 1734845 cri.go:76] found id: ""
	I0817 03:14:35.117555 1734845 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:14:35.123467 1734845 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 03:14:35.123518 1734845 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:14:35.129250 1734845 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 03:14:35.129282 1734845 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 03:14:35.417431 1734845 out.go:204]   - Generating certificates and keys ...
	I0817 03:14:37.693289 1734845 out.go:204]   - Booting up control plane ...
	I0817 03:14:53.789781 1734845 out.go:204]   - Configuring RBAC rules ...
	I0817 03:14:54.271378 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:14:54.271399 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:14:54.273315 1734845 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:14:54.273375 1734845 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:14:54.289118 1734845 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0817 03:14:54.289136 1734845 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:14:54.301138 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
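cni.go has selected kindnet for the docker driver + containerd runtime and applies its manifest with the cluster's own kubectl binary. To confirm the CNI actually came up afterwards, something like the following works, assuming the manifest names its DaemonSet kindnet (the kindnet-* pod names elsewhere in this log suggest it does):

    sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system rollout status daemonset/kindnet --timeout=2m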
	I0817 03:14:54.523108 1734845 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 03:14:54.523216 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:54.523274 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=no-preload-20210817030748-1554185 minikube.k8s.io/updated_at=2021_08_17T03_14_54_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:54.649615 1734845 ops.go:34] apiserver oom_adj: -16
	I0817 03:14:54.649729 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:55.213823 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:55.714158 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:56.213831 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:56.713761 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:57.213588 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:57.713978 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:58.213925 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:58.714067 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:59.213543 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:59.713381 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:00.213285 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:00.714139 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:01.213576 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:01.713429 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:02.213362 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:02.713600 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:03.213826 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:03.713770 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:04.214126 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:04.714144 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:05.213303 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:05.713594 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:06.213610 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:06.713444 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:06.819820 1734845 kubeadm.go:985] duration metric: took 12.296644721s to wait for elevateKubeSystemPrivileges.
	I0817 03:15:06.819846 1734845 kubeadm.go:392] StartCluster complete in 5m3.993715814s
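The burst of kubectl get sa default runs between 03:14:54 and 03:15:06 is elevateKubeSystemPrivileges waiting for the default ServiceAccount to exist, a cheap signal that the controller manager's service-account controller is up before the minikube-rbac binding created at 03:14:54 is relied on. Unrolled into a shell loop (same command as in the log):

    until sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done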
	I0817 03:15:06.819864 1734845 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:15:06.819944 1734845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:15:06.820939 1734845 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:15:07.350004 1734845 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210817030748-1554185" rescaled to 1
	I0817 03:15:07.350073 1734845 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0817 03:15:07.350125 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 03:15:07.350375 1734845 config.go:177] Loaded profile config "no-preload-20210817030748-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:15:07.350459 1734845 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0817 03:15:07.350509 1734845 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210817030748-1554185"
	I0817 03:15:07.350521 1734845 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210817030748-1554185"
	W0817 03:15:07.350526 1734845 addons.go:147] addon storage-provisioner should already be in state true
	I0817 03:15:07.350549 1734845 host.go:66] Checking if "no-preload-20210817030748-1554185" exists ...
	I0817 03:15:07.351057 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:07.353511 1734845 out.go:177] * Verifying Kubernetes components...
	I0817 03:15:07.353571 1734845 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:15:07.351676 1734845 addons.go:59] Setting metrics-server=true in profile "no-preload-20210817030748-1554185"
	I0817 03:15:07.353644 1734845 addons.go:135] Setting addon metrics-server=true in "no-preload-20210817030748-1554185"
	W0817 03:15:07.353655 1734845 addons.go:147] addon metrics-server should already be in state true
	I0817 03:15:07.353678 1734845 host.go:66] Checking if "no-preload-20210817030748-1554185" exists ...
	I0817 03:15:07.354126 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:07.351687 1734845 addons.go:59] Setting dashboard=true in profile "no-preload-20210817030748-1554185"
	I0817 03:15:07.354373 1734845 addons.go:135] Setting addon dashboard=true in "no-preload-20210817030748-1554185"
	W0817 03:15:07.354386 1734845 addons.go:147] addon dashboard should already be in state true
	I0817 03:15:07.354407 1734845 host.go:66] Checking if "no-preload-20210817030748-1554185" exists ...
	I0817 03:15:07.354872 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:07.351695 1734845 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210817030748-1554185"
	I0817 03:15:07.354952 1734845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210817030748-1554185"
	I0817 03:15:07.355167 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:07.554093 1734845 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 03:15:07.554191 1734845 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:15:07.554200 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 03:15:07.554249 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:15:07.556650 1734845 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0817 03:15:07.556714 1734845 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 03:15:07.556727 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 03:15:07.556780 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:15:07.561054 1734845 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0817 03:15:07.563924 1734845 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0817 03:15:07.563974 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 03:15:07.563986 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 03:15:07.564043 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:15:07.674885 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:15:07.680743 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:15:07.689341 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:15:07.704911 1734845 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210817030748-1554185"
	W0817 03:15:07.704930 1734845 addons.go:147] addon default-storageclass should already be in state true
	I0817 03:15:07.704954 1734845 host.go:66] Checking if "no-preload-20210817030748-1554185" exists ...
	I0817 03:15:07.705396 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:07.760674 1734845 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 03:15:07.760699 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 03:15:07.760750 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:15:07.828462 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:15:07.875139 1734845 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210817030748-1554185" to be "Ready" ...
	I0817 03:15:07.877871 1734845 node_ready.go:49] node "no-preload-20210817030748-1554185" has status "Ready":"True"
	I0817 03:15:07.877883 1734845 node_ready.go:38] duration metric: took 2.71825ms waiting for node "no-preload-20210817030748-1554185" to be "Ready" ...
	I0817 03:15:07.877892 1734845 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:15:07.879278 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 03:15:07.884771 1734845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-255bv" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:08.039933 1734845 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 03:15:08.039955 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0817 03:15:08.143347 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 03:15:08.143400 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 03:15:08.157756 1734845 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:15:08.166354 1734845 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 03:15:08.208956 1734845 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 03:15:08.208980 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 03:15:08.307288 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 03:15:08.307312 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 03:15:08.521940 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 03:15:08.521964 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 03:15:08.541828 1734845 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:15:08.541851 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 03:15:08.600817 1734845 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:15:08.687224 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 03:15:08.687290 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0817 03:15:08.911998 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 03:15:08.912020 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 03:15:08.925863 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 03:15:08.925881 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 03:15:08.991825 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 03:15:08.991848 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 03:15:09.005005 1734845 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.125697496s)
	I0817 03:15:09.005031 1734845 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
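start.go:728 confirms the host.minikube.internal record landed in CoreDNS. The one-liner issued at 03:15:07 is hard to read with all the escaping; the same pipeline split across lines (this is the command from the log, not a new one) is:

    sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
      | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -

It splices a hosts stanza with 192.168.49.1 host.minikube.internal (plus fallthrough) into the Corefile just ahead of the forward plugin, so pods can resolve the host-side gateway by name.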
	I0817 03:15:09.062106 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 03:15:09.062127 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 03:15:09.077182 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:15:09.077201 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 03:15:09.120420 1734845 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:15:09.499719 1734845 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210817030748-1554185"
	I0817 03:15:09.942558 1734845 pod_ready.go:102] pod "coredns-78fcd69978-255bv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:15:10.208986 1734845 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.088525772s)
	I0817 03:15:10.210980 1734845 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0817 03:15:10.211007 1734845 addons.go:344] enableAddons completed in 2.860552731s
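All four requested addons (storage-provisioner, default-storageclass, metrics-server, dashboard) are now reported enabled, about 2.86s after enableAddons started. Their state can also be checked from outside the cluster with the minikube CLI itself, using this run's binary and profile name:

    out/minikube-linux-arm64 -p no-preload-20210817030748-1554185 addons list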
	I0817 03:15:12.394778 1734845 pod_ready.go:102] pod "coredns-78fcd69978-255bv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:15:14.395328 1734845 pod_ready.go:102] pod "coredns-78fcd69978-255bv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:15:15.894556 1734845 pod_ready.go:92] pod "coredns-78fcd69978-255bv" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:15.894578 1734845 pod_ready.go:81] duration metric: took 8.009762641s waiting for pod "coredns-78fcd69978-255bv" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:15.894587 1734845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-9dmpz" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.900840 1734845 pod_ready.go:97] error getting pod "coredns-78fcd69978-9dmpz" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-9dmpz" not found
	I0817 03:15:16.900870 1734845 pod_ready.go:81] duration metric: took 1.006275778s waiting for pod "coredns-78fcd69978-9dmpz" in "kube-system" namespace to be "Ready" ...
	E0817 03:15:16.900879 1734845 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-9dmpz" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-9dmpz" not found
	I0817 03:15:16.900886 1734845 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.904873 1734845 pod_ready.go:92] pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:16.904892 1734845 pod_ready.go:81] duration metric: took 3.996071ms waiting for pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.904904 1734845 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.908416 1734845 pod_ready.go:92] pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:16.908436 1734845 pod_ready.go:81] duration metric: took 3.523894ms waiting for pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.908444 1734845 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.912269 1734845 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:16.912290 1734845 pod_ready.go:81] duration metric: took 3.839026ms waiting for pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.912300 1734845 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fd8hs" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.916134 1734845 pod_ready.go:92] pod "kube-proxy-fd8hs" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:16.916149 1734845 pod_ready.go:81] duration metric: took 3.815961ms waiting for pod "kube-proxy-fd8hs" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.916157 1734845 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:17.295352 1734845 pod_ready.go:92] pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:17.295374 1734845 pod_ready.go:81] duration metric: took 379.209584ms waiting for pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:17.295382 1734845 pod_ready.go:38] duration metric: took 9.417480039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:15:17.295425 1734845 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:15:17.295482 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:15:17.313198 1734845 api_server.go:70] duration metric: took 9.963092191s to wait for apiserver process to appear ...
	I0817 03:15:17.313220 1734845 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:15:17.313229 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:15:17.321429 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:15:17.322195 1734845 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 03:15:17.322234 1734845 api_server.go:129] duration metric: took 8.988733ms to wait for apiserver health ...
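The healthz probe at api_server.go:239 is a plain HTTPS GET. Under the default RBAC bootstrap policy /healthz is readable anonymously (system:public-info-viewer), so a hand-rolled check that skips certificate verification looks like:

    curl -sk https://192.168.49.2:8443/healthz

It should print ok, matching the 200 recorded above.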
	I0817 03:15:17.322243 1734845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:15:17.495925 1734845 system_pods.go:59] 9 kube-system pods found
	I0817 03:15:17.495960 1734845 system_pods.go:61] "coredns-78fcd69978-255bv" [841a6924-fa23-40b4-b6b6-b9d024444fc5] Running
	I0817 03:15:17.495966 1734845 system_pods.go:61] "etcd-no-preload-20210817030748-1554185" [83af6952-189b-4ce5-8707-0d14b34f2838] Running
	I0817 03:15:17.495985 1734845 system_pods.go:61] "kindnet-r9qg6" [5210d460-fdab-41e1-ad63-b434142322d6] Running
	I0817 03:15:17.495997 1734845 system_pods.go:61] "kube-apiserver-no-preload-20210817030748-1554185" [e4347f56-14c8-4cd6-a3fc-9d8a0caf0a8f] Running
	I0817 03:15:17.496002 1734845 system_pods.go:61] "kube-controller-manager-no-preload-20210817030748-1554185" [f43867c6-c471-4383-81f9-5b8231a5b73c] Running
	I0817 03:15:17.496015 1734845 system_pods.go:61] "kube-proxy-fd8hs" [59bcc4a4-33b2-44c2-8da3-a777113aaf58] Running
	I0817 03:15:17.496033 1734845 system_pods.go:61] "kube-scheduler-no-preload-20210817030748-1554185" [116e59d0-36e8-41fa-bd94-80d88aa2b8ce] Running
	I0817 03:15:17.496046 1734845 system_pods.go:61] "metrics-server-7c784ccb57-wct66" [ffb75899-dc46-4aaa-945c-7b87ae2e020f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:15:17.496058 1734845 system_pods.go:61] "storage-provisioner" [7a0dec2f-5605-4e68-8128-d88da36ed6dd] Running
	I0817 03:15:17.496070 1734845 system_pods.go:74] duration metric: took 173.818396ms to wait for pod list to return data ...
	I0817 03:15:17.496081 1734845 default_sa.go:34] waiting for default service account to be created ...
	I0817 03:15:17.699897 1734845 default_sa.go:45] found service account: "default"
	I0817 03:15:17.699921 1734845 default_sa.go:55] duration metric: took 203.834329ms for default service account to be created ...
	I0817 03:15:17.699930 1734845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 03:15:17.894490 1734845 system_pods.go:86] 9 kube-system pods found
	I0817 03:15:17.894524 1734845 system_pods.go:89] "coredns-78fcd69978-255bv" [841a6924-fa23-40b4-b6b6-b9d024444fc5] Running
	I0817 03:15:17.894531 1734845 system_pods.go:89] "etcd-no-preload-20210817030748-1554185" [83af6952-189b-4ce5-8707-0d14b34f2838] Running
	I0817 03:15:17.894536 1734845 system_pods.go:89] "kindnet-r9qg6" [5210d460-fdab-41e1-ad63-b434142322d6] Running
	I0817 03:15:17.894555 1734845 system_pods.go:89] "kube-apiserver-no-preload-20210817030748-1554185" [e4347f56-14c8-4cd6-a3fc-9d8a0caf0a8f] Running
	I0817 03:15:17.894583 1734845 system_pods.go:89] "kube-controller-manager-no-preload-20210817030748-1554185" [f43867c6-c471-4383-81f9-5b8231a5b73c] Running
	I0817 03:15:17.894595 1734845 system_pods.go:89] "kube-proxy-fd8hs" [59bcc4a4-33b2-44c2-8da3-a777113aaf58] Running
	I0817 03:15:17.894601 1734845 system_pods.go:89] "kube-scheduler-no-preload-20210817030748-1554185" [116e59d0-36e8-41fa-bd94-80d88aa2b8ce] Running
	I0817 03:15:17.894617 1734845 system_pods.go:89] "metrics-server-7c784ccb57-wct66" [ffb75899-dc46-4aaa-945c-7b87ae2e020f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:15:17.894628 1734845 system_pods.go:89] "storage-provisioner" [7a0dec2f-5605-4e68-8128-d88da36ed6dd] Running
	I0817 03:15:17.894636 1734845 system_pods.go:126] duration metric: took 194.70122ms to wait for k8s-apps to be running ...
	I0817 03:15:17.894658 1734845 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 03:15:17.894718 1734845 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:15:17.903613 1734845 system_svc.go:56] duration metric: took 8.962461ms WaitForService to wait for kubelet.
	I0817 03:15:17.903634 1734845 kubeadm.go:547] duration metric: took 10.553530802s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 03:15:17.903673 1734845 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:15:18.094517 1734845 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:15:18.094571 1734845 node_conditions.go:123] node cpu capacity is 2
	I0817 03:15:18.094594 1734845 node_conditions.go:105] duration metric: took 190.916323ms to run NodePressure ...
	I0817 03:15:18.094613 1734845 start.go:231] waiting for startup goroutines ...
	I0817 03:15:18.148145 1734845 start.go:462] kubectl: 1.21.3, cluster: 1.22.0-rc.0 (minor skew: 1)
	I0817 03:15:18.150374 1734845 out.go:177] * Done! kubectl is now configured to use "no-preload-20210817030748-1554185" cluster and "default" namespace by default
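Everything below this point is the post-mortem the test harness collects after the run: container status, the containerd journal, CoreDNS output and the node description. The sections match the format produced by minikube's log dump for the profile, e.g.:

    out/minikube-linux-arm64 -p no-preload-20210817030748-1554185 logs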
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	8ca174a35e533       523cad1a4df73       15 seconds ago      Exited              dashboard-metrics-scraper   1                   57b52fd56663c
	1c1aaa8275d7f       85e6c0cff043f       20 seconds ago      Running             kubernetes-dashboard        0                   2d2f70d019019
	0afb6f1b41c82       66749159455b3       21 seconds ago      Running             storage-provisioner         0                   e8863073b7ce8
	555538de5495c       6d3ffc2696ac2       23 seconds ago      Running             coredns                     0                   26bc1107196ae
	664e7d8467668       5f7fafb97c956       24 seconds ago      Running             kube-proxy                  0                   e0eb4637af52a
	6975143cba57b       f37b7c809e5dc       24 seconds ago      Running             kindnet-cni                 0                   947112f02450c
	9f6890fa1f8d4       41065afd0ca8b       47 seconds ago      Running             kube-controller-manager     2                   72b86153695a4
	db8ca1b5b254e       82ecd1e357878       47 seconds ago      Running             kube-scheduler              2                   d80c4c007562b
	f77fa64a35d3f       2252d5eb703b0       47 seconds ago      Running             etcd                        2                   6269a024ac283
	aede5058d779c       6fe8178781397       47 seconds ago      Running             kube-apiserver              2                   411fb037027e5
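The container status table is the node's CRI view of what is running; with the docker driver it can be reproduced by running crictl inside the minikube container:

    out/minikube-linux-arm64 -p no-preload-20210817030748-1554185 ssh -- sudo crictl ps -a

Note that dashboard-metrics-scraper is already on attempt 1 and Exited, which lines up with the shim exits in the containerd journal below.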
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 03:09:46 UTC, end at Tue 2021-08-17 03:15:32 UTC. --
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.069313823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.070069179Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.074177478Z" level=info msg="CreateContainer within sandbox \"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.387131127Z" level=info msg="CreateContainer within sandbox \"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.387612968Z" level=info msg="StartContainer for \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.462785358Z" level=info msg="Finish piping stderr of container \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.463072108Z" level=info msg="Finish piping stdout of container \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.465435760Z" level=info msg="StartContainer for \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\" returns successfully"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.465570807Z" level=info msg="TaskExit event &TaskExit{ContainerID:8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9,ID:8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9,Pid:4762,ExitStatus:1,ExitedAt:2021-08-17 03:15:16.462898104 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.527830975Z" level=info msg="shim disconnected" id=8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.527985451Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.736802220Z" level=info msg="CreateContainer within sandbox \"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.755721634Z" level=info msg="CreateContainer within sandbox \"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.756175636Z" level=info msg="StartContainer for \"8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.838907441Z" level=info msg="Finish piping stdout of container \"8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.839085400Z" level=info msg="Finish piping stderr of container \"8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.843025758Z" level=info msg="StartContainer for \"8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1\" returns successfully"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.843113027Z" level=info msg="TaskExit event &TaskExit{ContainerID:8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1,ID:8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1,Pid:4830,ExitStatus:1,ExitedAt:2021-08-17 03:15:16.840489677 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.878964363Z" level=info msg="shim disconnected" id=8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.879017614Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Aug 17 03:15:17 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:17.745141895Z" level=info msg="RemoveContainer for \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\""
	Aug 17 03:15:17 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:17.756194969Z" level=info msg="RemoveContainer for \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\" returns successfully"
	Aug 17 03:15:24 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:24.602405025Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 17 03:15:24 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:24.606450080Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Aug 17 03:15:24 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:24.608537632Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	
	* 
	* ==> coredns [555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/arm64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20210817030748-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-20210817030748-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=no-preload-20210817030748-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T03_14_54_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 03:14:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20210817030748-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 03:15:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 03:15:05 +0000   Tue, 17 Aug 2021 03:14:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 03:15:05 +0000   Tue, 17 Aug 2021 03:14:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 03:15:05 +0000   Tue, 17 Aug 2021 03:14:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 03:15:05 +0000   Tue, 17 Aug 2021 03:15:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    no-preload-20210817030748-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                3aabbd15-2269-48c0-a588-935b665ad168
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-255bv                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-no-preload-20210817030748-1554185                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-r9qg6                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-20210817030748-1554185              250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-20210817030748-1554185     200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-fd8hs                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-20210817030748-1554185              100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 metrics-server-7c784ccb57-wct66                               100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         23s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-85bpp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-l6fk8                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)   100m (5%)
	  memory             520Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  49s (x5 over 49s)  kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x4 over 49s)  kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x4 over 49s)  kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s                kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  33s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                27s                kubelet  Node no-preload-20210817030748-1554185 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d] <==
	* raft2021/08/17 03:14:44 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:14:44.525125 W | auth: simple token is not cryptographically signed
	2021-08-17 03:14:44.549678 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-17 03:14:44.557828 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-08-17 03:14:44.590323 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 03:14:44.590455 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/17 03:14:44 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:14:44.590699 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 03:14:44.590734 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 03:14:45 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 03:14:45 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 03:14:45 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 03:14:45 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 03:14:45 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 03:14:45.226827 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 03:14:45.227787 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 03:14:45.227869 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 03:14:45.227932 I | etcdserver: published {Name:no-preload-20210817030748-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 03:14:45.228068 I | embed: ready to serve client requests
	2021-08-17 03:14:45.229431 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 03:14:45.238907 I | embed: ready to serve client requests
	2021-08-17 03:14:45.324168 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 03:15:08.706172 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 03:15:18.365267 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 03:15:28.364964 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  03:15:32 up 10:57,  0 users,  load average: 3.45, 2.17, 1.78
	Linux no-preload-20210817030748-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3] <==
	* I0817 03:14:51.167068       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 03:14:51.192382       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 03:14:51.194882       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 03:14:51.195307       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0817 03:14:51.865793       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 03:14:51.865818       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 03:14:51.896910       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 03:14:51.903213       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 03:14:51.903327       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 03:14:52.380908       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 03:14:52.411286       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 03:14:52.499569       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 03:14:52.500469       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 03:14:52.503958       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 03:14:53.120071       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 03:14:54.115578       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 03:14:54.236979       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 03:14:59.613535       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 03:15:06.637576       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 03:15:06.826987       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	E0817 03:15:09.513302       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	W0817 03:15:11.394581       1 handler_proxy.go:104] no RequestInfo found in the context
	E0817 03:15:11.394641       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 03:15:11.394648       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18] <==
	* I0817 03:15:09.585383       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0817 03:15:09.685204       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0817 03:15:09.704288       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.719311       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.719630       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.750416       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:15:09.780136       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:15:09.781016       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.781462       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:15:09.781480       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.816741       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:15:09.817015       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.817049       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:15:09.817063       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.840456       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:15:09.840793       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.840829       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:15:09.840844       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.857028       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.857075       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.863427       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.863473       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:15:09.938021       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-85bpp"
	I0817 03:15:09.950876       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-l6fk8"
	I0817 03:15:10.925644       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44] <==
	* I0817 03:15:07.758888       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 03:15:07.758939       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 03:15:07.758953       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0817 03:15:07.840730       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 03:15:07.840762       1 server_others.go:212] Using iptables Proxier.
	I0817 03:15:07.840773       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 03:15:07.840789       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 03:15:07.841096       1 server.go:649] Version: v1.22.0-rc.0
	I0817 03:15:07.857151       1 config.go:315] Starting service config controller
	I0817 03:15:07.857163       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 03:15:07.857212       1 config.go:224] Starting endpoint slice config controller
	I0817 03:15:07.857215       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0817 03:15:07.862062       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210817030748-1554185.169bf994d49356c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ee84ef227c2c9, ext:583304160, loc:(*time.Location)(0x2698ec0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210817030748-1554185", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"",
Name:"no-preload-20210817030748-1554185", UID:"no-preload-20210817030748-1554185", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210817030748-1554185.169bf994d49356c7" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0817 03:15:07.958864       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 03:15:07.959022       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3] <==
	* W0817 03:14:51.031663       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 03:14:51.130483       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0817 03:14:51.130962       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0817 03:14:51.134011       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0817 03:14:51.136501       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 03:14:51.136529       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0817 03:14:51.151652       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 03:14:51.151786       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 03:14:51.151918       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 03:14:51.152020       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 03:14:51.152146       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 03:14:51.152251       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 03:14:51.152345       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 03:14:51.152470       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 03:14:51.152559       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 03:14:51.152635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 03:14:51.152786       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 03:14:51.152874       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 03:14:51.152898       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 03:14:51.159050       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 03:14:52.117892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 03:14:52.127031       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 03:14:52.181725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 03:14:52.181950       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0817 03:14:55.337154       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 03:09:46 UTC, end at Tue 2021-08-17 03:15:32 UTC. --
	Aug 17 03:15:14 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:14.201950    3297 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff509214-3ed3-48e0-9233-0d56cc3583e8-config-volume\") on node \"no-preload-20210817030748-1554185\" DevicePath \"\""
	Aug 17 03:15:14 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:14.727101    3297 scope.go:110] "RemoveContainer" containerID="3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99"
	Aug 17 03:15:14 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:14.735192    3297 scope.go:110] "RemoveContainer" containerID="3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99"
	Aug 17 03:15:14 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:14.735840    3297 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99\": not found" containerID="3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99"
	Aug 17 03:15:14 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:14.735878    3297 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99} err="failed to get container status \"3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99\": not found"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:16.732898    3297 scope.go:110] "RemoveContainer" containerID="8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9"
	Aug 17 03:15:17 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:17.606324    3297 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ff509214-3ed3-48e0-9233-0d56cc3583e8 path="/var/lib/kubelet/pods/ff509214-3ed3-48e0-9233-0d56cc3583e8/volumes"
	Aug 17 03:15:17 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:17.743496    3297 scope.go:110] "RemoveContainer" containerID="8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9"
	Aug 17 03:15:17 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:17.743839    3297 scope.go:110] "RemoveContainer" containerID="8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1"
	Aug 17 03:15:17 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:17.744116    3297 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-85bpp_kubernetes-dashboard(dff4da5a-2bfd-4949-9ae0-1ac5b7d02599)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-85bpp" podUID=dff4da5a-2bfd-4949-9ae0-1ac5b7d02599
	Aug 17 03:15:17 no-preload-20210817030748-1554185 kubelet[3297]: W0817 03:15:17.937991    3297 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/poddff4da5a-2bfd-4949-9ae0-1ac5b7d02599/8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9 WatchSource:0}: container "8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9" in namespace "k8s.io": not found
	Aug 17 03:15:18 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:18.746507    3297 scope.go:110] "RemoveContainer" containerID="8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1"
	Aug 17 03:15:18 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:18.746796    3297 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-85bpp_kubernetes-dashboard(dff4da5a-2bfd-4949-9ae0-1ac5b7d02599)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-85bpp" podUID=dff4da5a-2bfd-4949-9ae0-1ac5b7d02599
	Aug 17 03:15:19 no-preload-20210817030748-1554185 kubelet[3297]: W0817 03:15:19.444121    3297 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/poddff4da5a-2bfd-4949-9ae0-1ac5b7d02599/8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1 WatchSource:0}: task 8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1 not found: not found
	Aug 17 03:15:19 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:19.956731    3297 scope.go:110] "RemoveContainer" containerID="8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1"
	Aug 17 03:15:19 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:19.957035    3297 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-85bpp_kubernetes-dashboard(dff4da5a-2bfd-4949-9ae0-1ac5b7d02599)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-85bpp" podUID=dff4da5a-2bfd-4949-9ae0-1ac5b7d02599
	Aug 17 03:15:20 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:20.607398    3297 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod7a0dec2f-5605-4e68-8128-d88da36ed6dd\": RecentStats: unable to find data in memory cache]"
	Aug 17 03:15:24 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:24.608709    3297 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 03:15:24 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:24.608753    3297 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 03:15:24 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:24.608857    3297 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nqc4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handl
er{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]
VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-wct66_kube-system(ffb75899-dc46-4aaa-945c-7b87ae2e020f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Aug 17 03:15:24 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:24.608900    3297 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-7c784ccb57-wct66" podUID=ffb75899-dc46-4aaa-945c-7b87ae2e020f
	Aug 17 03:15:29 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:29.437481    3297 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 17 03:15:29 no-preload-20210817030748-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 03:15:29 no-preload-20210817030748-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 03:15:29 no-preload-20210817030748-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d] <==
	* 2021/08/17 03:15:12 Starting overwatch
	2021/08/17 03:15:12 Using namespace: kubernetes-dashboard
	2021/08/17 03:15:12 Using in-cluster config to connect to apiserver
	2021/08/17 03:15:12 Using secret token for csrf signing
	2021/08/17 03:15:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/17 03:15:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/17 03:15:12 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/17 03:15:12 Generating JWE encryption key
	2021/08/17 03:15:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/17 03:15:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/17 03:15:12 Initializing JWE encryption key from synchronized object
	2021/08/17 03:15:12 Creating in-cluster Sidecar client
	2021/08/17 03:15:12 Serving insecurely on HTTP port: 9090
	2021/08/17 03:15:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe] <==
	* I0817 03:15:10.404753       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 03:15:10.425203       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 03:15:10.425420       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 03:15:10.432298       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 03:15:10.432420       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20210817030748-1554185_a1d6c892-8e55-4645-94a1-949ba8f71cee!
	I0817 03:15:10.433196       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4650a313-4fc6-4e41-a1e6-959a74610025", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20210817030748-1554185_a1d6c892-8e55-4645-94a1-949ba8f71cee became leader
	I0817 03:15:10.533219       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20210817030748-1554185_a1d6c892-8e55-4645-94a1-949ba8f71cee!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185: exit status 2 (393.506564ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20210817030748-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-wct66
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20210817030748-1554185 describe pod metrics-server-7c784ccb57-wct66
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20210817030748-1554185 describe pod metrics-server-7c784ccb57-wct66: exit status 1 (104.963104ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-wct66" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context no-preload-20210817030748-1554185 describe pod metrics-server-7c784ccb57-wct66: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210817030748-1554185
helpers_test.go:236: (dbg) docker inspect no-preload-20210817030748-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d",
	        "Created": "2021-08-17T03:07:49.718412699Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1735060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T03:09:46.379410525Z",
	            "FinishedAt": "2021-08-17T03:09:45.091566147Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d/hostname",
	        "HostsPath": "/var/lib/docker/containers/80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d/hosts",
	        "LogPath": "/var/lib/docker/containers/80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d/80cc9e585a53c9332b6b3abef31685390274cfaaeff6c6724d804bcae6849e4d-json.log",
	        "Name": "/no-preload-20210817030748-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210817030748-1554185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210817030748-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/15e895b9138d0bdfa0e7ee3396e4f20c5d39d0a934f410d57fb83618afc25d80-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15e895b9138d0bdfa0e7ee3396e4f20c5d39d0a934f410d57fb83618afc25d80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15e895b9138d0bdfa0e7ee3396e4f20c5d39d0a934f410d57fb83618afc25d80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15e895b9138d0bdfa0e7ee3396e4f20c5d39d0a934f410d57fb83618afc25d80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210817030748-1554185",
	                "Source": "/var/lib/docker/volumes/no-preload-20210817030748-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210817030748-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210817030748-1554185",
	                "name.minikube.sigs.k8s.io": "no-preload-20210817030748-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a441a4132ef69378491cb7e753ee36b706e2c7f656b9cb2a62b52763b1c4b562",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50493"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50492"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50489"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50491"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50490"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a441a4132ef6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210817030748-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "80cc9e585a53",
	                        "no-preload-20210817030748-1554185"
	                    ],
	                    "NetworkID": "cd4aea319b4395fa95af3da2082bfc6147b0843f2f50f1029b55cb467a861890",
	                    "EndpointID": "eff2a9a3f8b00faf610ca859723eaa49f986a3775044af2722829183e8342750",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
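The inspect output above shows the container's published ports (22, 2376, 5000, 8443, 32443/tcp) bound to 127.0.0.1 with ephemeral host ports (50489-50493), and the profile network assigning 192.168.49.2. For reference when reproducing this locally, the same fields can be read individually with the Go-template queries minikube's cli_runner issues later in this log (container name taken from the output above; a debugging sketch, not part of the test run):

	# container state (running/paused/exited)
	docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	# container IP on the profile network
	docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210817030748-1554185
	# host port mapped to the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-20210817030748-1554185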
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185: exit status 2 (302.353003ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
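Note that `--format={{.Host}}` reports only the host (container) state, so "Running" above is consistent with a nonzero exit: the exit code likely reflects another component (kubelet or apiserver) not being in the expected state after the pause. A quick way to see the full breakdown when debugging locally (a sketch, not part of the test run; the {{.APIServer}} field name is assumed from minikube's status template):

	# full component breakdown (host, kubelet, apiserver, kubeconfig)
	out/minikube-linux-arm64 -p no-preload-20210817030748-1554185 status
	# or query a single component with the same Go-template flag
	out/minikube-linux-arm64 -p no-preload-20210817030748-1554185 status --format={{.APIServer}}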
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-20210817030748-1554185 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:253: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                      Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:02 UTC | Tue, 17 Aug 2021 02:51:03 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:03 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:51:24 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:51:24 UTC | Tue, 17 Aug 2021 02:57:04 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                   |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                   |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:57:15 UTC | Tue, 17 Aug 2021 02:57:15 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:05 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| delete  | -p                                                | default-k8s-different-port-20210817024852-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 02:59:08 UTC |
	|         | default-k8s-different-port-20210817024852-1554185 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 02:59:08 UTC | Tue, 17 Aug 2021 03:01:12 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:21 UTC | Tue, 17 Aug 2021 03:01:22 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:22 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:07:27 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:38 UTC | Tue, 17 Aug 2021 03:07:38 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:40 UTC | Tue, 17 Aug 2021 03:07:41 UTC |
	|         | logs -n 25                                        |                                                   |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:42 UTC | Tue, 17 Aug 2021 03:07:43 UTC |
	|         | logs -n 25                                        |                                                   |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:44 UTC | Tue, 17 Aug 2021 03:07:47 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210817025908-1554185                | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:47 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | embed-certs-20210817025908-1554185                |                                                   |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210817030748-1554185      | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | disable-driver-mounts-20210817030748-1554185      |                                                   |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:09:14 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:24 UTC | Tue, 17 Aug 2021 03:09:24 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:25 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:15:18 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:28 UTC | Tue, 17 Aug 2021 03:15:29 UTC |
	|         | no-preload-20210817030748-1554185                 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| -p      | no-preload-20210817030748-1554185                 | no-preload-20210817030748-1554185                 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:31 UTC | Tue, 17 Aug 2021 03:15:32 UTC |
	|         | logs -n 25                                        |                                                   |         |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 03:09:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 03:09:45.595717 1734845 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:09:45.595882 1734845 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:09:45.595892 1734845 out.go:311] Setting ErrFile to fd 2...
	I0817 03:09:45.595896 1734845 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:09:45.596029 1734845 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:09:45.596261 1734845 out.go:305] Setting JSON to false
	I0817 03:09:45.597078 1734845 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39124,"bootTime":1629130662,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 03:09:45.597149 1734845 start.go:121] virtualization:  
	I0817 03:09:45.599691 1734845 out.go:177] * [no-preload-20210817030748-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 03:09:45.602314 1734845 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 03:09:45.601227 1734845 notify.go:169] Checking for updates...
	I0817 03:09:45.604506 1734845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:09:45.606220 1734845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 03:09:45.607782 1734845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 03:09:45.608182 1734845 config.go:177] Loaded profile config "no-preload-20210817030748-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:09:45.608622 1734845 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 03:09:45.645796 1734845 docker.go:132] docker version: linux-20.10.8
	I0817 03:09:45.645869 1734845 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:09:45.761101 1734845 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:09:45.693503398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:09:45.761245 1734845 docker.go:244] overlay module found
	I0817 03:09:45.763624 1734845 out.go:177] * Using the docker driver based on existing profile
	I0817 03:09:45.763643 1734845 start.go:278] selected driver: docker
	I0817 03:09:45.763649 1734845 start.go:751] validating driver "docker" against &{Name:no-preload-20210817030748-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHo
stTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:09:45.763770 1734845 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 03:09:45.763808 1734845 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:09:45.763823 1734845 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:09:45.765334 1734845 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:09:45.765622 1734845 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:09:45.879144 1734845 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:09:45.801060075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 03:09:45.879289 1734845 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:09:45.879303 1734845 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:09:45.881509 1734845 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:09:45.881598 1734845 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 03:09:45.881618 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:09:45.881625 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:09:45.881634 1734845 start_flags.go:277] config:
	{Name:no-preload-20210817030748-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Multi
NodeRequested:false ExtraDisks:0}
	I0817 03:09:45.883980 1734845 out.go:177] * Starting control plane node no-preload-20210817030748-1554185 in cluster no-preload-20210817030748-1554185
	I0817 03:09:45.884009 1734845 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 03:09:45.885867 1734845 out.go:177] * Pulling base image ...
	I0817 03:09:45.885887 1734845 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 03:09:45.886004 1734845 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/config.json ...
	I0817 03:09:45.886270 1734845 cache.go:108] acquiring lock: {Name:mk632f6e0db9416813fd07fccbb58335b8e59d21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886405 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0817 03:09:45.886419 1734845 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 157.077µs
	I0817 03:09:45.886429 1734845 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0817 03:09:45.886443 1734845 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 03:09:45.886608 1734845 cache.go:108] acquiring lock: {Name:mk4fc0e92492b47d614457da59bc6dab952f8b05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886684 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 exists
	I0817 03:09:45.886696 1734845 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1" took 92.225µs
	I0817 03:09:45.886705 1734845 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.4.1 succeeded
	I0817 03:09:45.886719 1734845 cache.go:108] acquiring lock: {Name:mk6dba5734dfeaf6d9d4511e98f054cac0439cfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886771 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 exists
	I0817 03:09:45.886780 1734845 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0" took 62.432µs
	I0817 03:09:45.886790 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.0-rc.0 succeeded
	I0817 03:09:45.886801 1734845 cache.go:108] acquiring lock: {Name:mkacaa9736949fc5d0494bb1d5c3531771bb3ea8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886855 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 exists
	I0817 03:09:45.886864 1734845 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0" took 64.738µs
	I0817 03:09:45.886873 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.0-rc.0 succeeded
	I0817 03:09:45.886885 1734845 cache.go:108] acquiring lock: {Name:mkf7cd9af6d882fda3a954c4eb39d82dc77cd0d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886917 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 exists
	I0817 03:09:45.886924 1734845 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0" took 41.107µs
	I0817 03:09:45.886932 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.0-rc.0 succeeded
	I0817 03:09:45.886942 1734845 cache.go:108] acquiring lock: {Name:mkeec948dbb922c159c4fc1af8656d60fa14d5a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.886975 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 exists
	I0817 03:09:45.886984 1734845 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0" took 42.682µs
	I0817 03:09:45.886994 1734845 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.0-rc.0 succeeded
	I0817 03:09:45.887005 1734845 cache.go:108] acquiring lock: {Name:mkb04986d0796ebd5c4c0669e3d06018c5856bea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887038 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 exists
	I0817 03:09:45.887045 1734845 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0" took 41.862µs
	I0817 03:09:45.887053 1734845 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.0 succeeded
	I0817 03:09:45.887063 1734845 cache.go:108] acquiring lock: {Name:mk79883006bb65c2c14816b6b80621971bab0e63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887095 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 exists
	I0817 03:09:45.887102 1734845 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3" took 40.5µs
	I0817 03:09:45.887110 1734845 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-3 succeeded
	I0817 03:09:45.887121 1734845 cache.go:108] acquiring lock: {Name:mk9f3113ef4c19ec91ec377b2f94212c471844e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887153 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0817 03:09:45.887160 1734845 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 40.606µs
	I0817 03:09:45.887170 1734845 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0817 03:09:45.887180 1734845 cache.go:108] acquiring lock: {Name:mk17550e76c320cd5e7ed26cfb8c625219e409db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.887221 1734845 cache.go:116] /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0817 03:09:45.887229 1734845 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 49.772µs
	I0817 03:09:45.887241 1734845 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0817 03:09:45.887246 1734845 cache.go:88] Successfully saved all images to host disk.
	I0817 03:09:45.961928 1734845 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 03:09:45.961949 1734845 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 03:09:45.961966 1734845 cache.go:205] Successfully downloaded all kic artifacts
	I0817 03:09:45.962001 1734845 start.go:313] acquiring machines lock for no-preload-20210817030748-1554185: {Name:mkb71c7d4561b567efc566d76b68a021481de41c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:09:45.962079 1734845 start.go:317] acquired machines lock for "no-preload-20210817030748-1554185" in 63.121µs
	I0817 03:09:45.962097 1734845 start.go:93] Skipping create...Using existing machine configuration
	I0817 03:09:45.962102 1734845 fix.go:55] fixHost starting: 
	I0817 03:09:45.962404 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:09:46.008511 1734845 fix.go:108] recreateIfNeeded on no-preload-20210817030748-1554185: state=Stopped err=<nil>
	W0817 03:09:46.008543 1734845 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 03:09:46.010730 1734845 out.go:177] * Restarting existing docker container for "no-preload-20210817030748-1554185" ...
	I0817 03:09:46.010790 1734845 cli_runner.go:115] Run: docker start no-preload-20210817030748-1554185
	I0817 03:09:46.387790 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:09:46.422718 1734845 kic.go:420] container "no-preload-20210817030748-1554185" state is running.
	I0817 03:09:46.423270 1734845 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210817030748-1554185
	I0817 03:09:46.455024 1734845 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/config.json ...
	I0817 03:09:46.455190 1734845 machine.go:88] provisioning docker machine ...
	I0817 03:09:46.455203 1734845 ubuntu.go:169] provisioning hostname "no-preload-20210817030748-1554185"
	I0817 03:09:46.455245 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:46.490850 1734845 main.go:130] libmachine: Using SSH client type: native
	I0817 03:09:46.491023 1734845 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50493 <nil> <nil>}
	I0817 03:09:46.491052 1734845 main.go:130] libmachine: About to run SSH command:
	sudo hostname no-preload-20210817030748-1554185 && echo "no-preload-20210817030748-1554185" | sudo tee /etc/hostname
	I0817 03:09:46.491652 1734845 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52816->127.0.0.1:50493: read: connection reset by peer
	I0817 03:09:49.617716 1734845 main.go:130] libmachine: SSH cmd err, output: <nil>: no-preload-20210817030748-1554185
	
	I0817 03:09:49.617784 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:49.649149 1734845 main.go:130] libmachine: Using SSH client type: native
	I0817 03:09:49.649317 1734845 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50493 <nil> <nil>}
	I0817 03:09:49.649345 1734845 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210817030748-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210817030748-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210817030748-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 03:09:49.761988 1734845 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 03:09:49.762013 1734845 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 03:09:49.762044 1734845 ubuntu.go:177] setting up certificates
	I0817 03:09:49.762053 1734845 provision.go:83] configureAuth start
	I0817 03:09:49.762113 1734845 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210817030748-1554185
	I0817 03:09:49.805225 1734845 provision.go:138] copyHostCerts
	I0817 03:09:49.805278 1734845 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 03:09:49.805285 1734845 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 03:09:49.805341 1734845 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 03:09:49.805411 1734845 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 03:09:49.805418 1734845 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 03:09:49.805438 1734845 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 03:09:49.805482 1734845 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 03:09:49.805486 1734845 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 03:09:49.805505 1734845 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 03:09:49.805539 1734845 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.no-preload-20210817030748-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210817030748-1554185]
	I0817 03:09:50.088826 1734845 provision.go:172] copyRemoteCerts
	I0817 03:09:50.088904 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 03:09:50.088957 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.120178 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.204693 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 03:09:50.219811 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0817 03:09:50.234847 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 03:09:50.249568 1734845 provision.go:86] duration metric: configureAuth took 487.504363ms
	I0817 03:09:50.249586 1734845 ubuntu.go:193] setting minikube options for container-runtime
	I0817 03:09:50.249747 1734845 config.go:177] Loaded profile config "no-preload-20210817030748-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:09:50.249761 1734845 machine.go:91] provisioned docker machine in 3.794563903s
	I0817 03:09:50.249768 1734845 start.go:267] post-start starting for "no-preload-20210817030748-1554185" (driver="docker")
	I0817 03:09:50.249775 1734845 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 03:09:50.249819 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 03:09:50.249855 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.280274 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.364600 1734845 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 03:09:50.366851 1734845 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 03:09:50.366874 1734845 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 03:09:50.366885 1734845 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 03:09:50.366893 1734845 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 03:09:50.366902 1734845 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 03:09:50.366958 1734845 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 03:09:50.367038 1734845 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 03:09:50.367128 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 03:09:50.372464 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:09:50.386603 1734845 start.go:270] post-start completed in 136.82351ms
	I0817 03:09:50.386665 1734845 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 03:09:50.386708 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.416929 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.499112 1734845 fix.go:57] fixHost completed within 4.537005735s
	I0817 03:09:50.499156 1734845 start.go:80] releasing machines lock for "no-preload-20210817030748-1554185", held for 4.537067921s
	I0817 03:09:50.499234 1734845 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210817030748-1554185
	I0817 03:09:50.528911 1734845 ssh_runner.go:149] Run: systemctl --version
	I0817 03:09:50.528958 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.529174 1734845 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 03:09:50.529224 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:09:50.570479 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.581923 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:09:50.789210 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 03:09:50.807188 1734845 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 03:09:50.817365 1734845 docker.go:153] disabling docker service ...
	I0817 03:09:50.817403 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 03:09:50.828441 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 03:09:50.838006 1734845 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 03:09:50.937592 1734845 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 03:09:51.044840 1734845 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 03:09:51.053358 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 03:09:51.064501 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
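	[editor's note] The command above writes containerd's configuration by piping a base64 blob through "base64 -d | sudo tee /etc/containerd/config.toml" on the node. To inspect what was actually written, the blob can be decoded off-node; a minimal Go sketch of that decode step (reading the blob from stdin is an assumption for illustration, not part of minikube):

	package main

	import (
		"encoding/base64"
		"fmt"
		"io"
		"log"
		"os"
		"strings"
	)

	func main() {
		// Paste the base64 blob from the log line above on stdin.
		blob, err := io.ReadAll(os.Stdin)
		if err != nil {
			log.Fatalf("read stdin: %v", err)
		}
		// Decode exactly as "base64 -d" does on the node.
		raw, err := base64.StdEncoding.DecodeString(strings.TrimSpace(string(blob)))
		if err != nil {
			log.Fatalf("decode containerd config: %v", err)
		}
		// This is the TOML that ends up in /etc/containerd/config.toml.
		fmt.Println(string(raw))
	}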
	I0817 03:09:51.075958 1734845 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 03:09:51.081433 1734845 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 03:09:51.086713 1734845 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 03:09:51.172289 1734845 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 03:09:51.291561 1734845 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 03:09:51.291621 1734845 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 03:09:51.295154 1734845 start.go:413] Will wait 60s for crictl version
	I0817 03:09:51.295201 1734845 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:09:51.318238 1734845 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T03:09:51Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 03:10:02.365028 1734845 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:10:02.391854 1734845 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 03:10:02.391910 1734845 ssh_runner.go:149] Run: containerd --version
	I0817 03:10:02.413907 1734845 ssh_runner.go:149] Run: containerd --version
	I0817 03:10:02.437726 1734845 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0817 03:10:02.437797 1734845 cli_runner.go:115] Run: docker network inspect no-preload-20210817030748-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:10:02.468777 1734845 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 03:10:02.471802 1734845 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:10:02.480305 1734845 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 03:10:02.480343 1734845 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:10:02.504624 1734845 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:10:02.504642 1734845 cache_images.go:74] Images are preloaded, skipping loading
	I0817 03:10:02.504681 1734845 ssh_runner.go:149] Run: sudo crictl info
	I0817 03:10:02.525951 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:10:02.525972 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:10:02.525982 1734845 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 03:10:02.525994 1734845 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210817030748-1554185 NodeName:no-preload-20210817030748-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 03:10:02.526120 1734845 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20210817030748-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 03:10:02.526201 1734845 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20210817030748-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 03:10:02.526251 1734845 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0817 03:10:02.532165 1734845 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 03:10:02.532209 1734845 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 03:10:02.538076 1734845 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (583 bytes)
	I0817 03:10:02.549244 1734845 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 03:10:02.559733 1734845 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
	I0817 03:10:02.570496 1734845 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 03:10:02.572893 1734845 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:10:02.580287 1734845 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185 for IP: 192.168.49.2
	I0817 03:10:02.580356 1734845 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 03:10:02.580376 1734845 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 03:10:02.580418 1734845 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.key
	I0817 03:10:02.580452 1734845 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/apiserver.key.dd3b5fb2
	I0817 03:10:02.580472 1734845 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/proxy-client.key
	I0817 03:10:02.580563 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 03:10:02.580621 1734845 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 03:10:02.580635 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 03:10:02.580658 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 03:10:02.580690 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 03:10:02.580716 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 03:10:02.580762 1734845 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:10:02.581815 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 03:10:02.596113 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 03:10:02.610196 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 03:10:02.624534 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 03:10:02.638602 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 03:10:02.652883 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 03:10:02.667765 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 03:10:02.682078 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 03:10:02.696076 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 03:10:02.710132 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 03:10:02.724104 1734845 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 03:10:02.741265 1734845 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 03:10:02.753135 1734845 ssh_runner.go:149] Run: openssl version
	I0817 03:10:02.758662 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 03:10:02.766005 1734845 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 03:10:02.769984 1734845 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 03:10:02.770058 1734845 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 03:10:02.774787 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 03:10:02.782462 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 03:10:02.788967 1734845 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 03:10:02.791943 1734845 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 03:10:02.792016 1734845 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 03:10:02.796538 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 03:10:02.803506 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 03:10:02.810234 1734845 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:10:02.813269 1734845 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:10:02.813341 1734845 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:10:02.817670 1734845 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 03:10:02.826139 1734845 kubeadm.go:390] StartCluster: {Name:no-preload-20210817030748-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817030748-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:10:02.826272 1734845 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 03:10:02.829083 1734845 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:10:02.860893 1734845 cri.go:76] found id: "5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d"
	I0817 03:10:02.860910 1734845 cri.go:76] found id: "9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919"
	I0817 03:10:02.860916 1734845 cri.go:76] found id: "36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5"
	I0817 03:10:02.860920 1734845 cri.go:76] found id: "d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499"
	I0817 03:10:02.860925 1734845 cri.go:76] found id: "cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60"
	I0817 03:10:02.860931 1734845 cri.go:76] found id: "761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63"
	I0817 03:10:02.860938 1734845 cri.go:76] found id: "55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f"
	I0817 03:10:02.860943 1734845 cri.go:76] found id: "4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc"
	I0817 03:10:02.860947 1734845 cri.go:76] found id: ""
	I0817 03:10:02.860986 1734845 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:10:02.876381 1734845 cri.go:103] JSON = null
	W0817 03:10:02.876425 1734845 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0817 03:10:02.876467 1734845 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 03:10:02.883651 1734845 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 03:10:02.883665 1734845 kubeadm.go:600] restartCluster start
	I0817 03:10:02.883716 1734845 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 03:10:02.891213 1734845 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:02.892046 1734845 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210817030748-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:10:02.892308 1734845 kubeconfig.go:128] "no-preload-20210817030748-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 03:10:02.892839 1734845 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:10:02.895523 1734845 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 03:10:02.902064 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:02.902116 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:02.911081 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.111403 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.111554 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.120856 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.312115 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.312186 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.321126 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.511190 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.511278 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.520092 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.711390 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.711449 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.720190 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:03.911517 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:03.911590 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:03.927188 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.111420 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.111475 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.120528 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.311784 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.311847 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.320515 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.511725 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.511806 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.520331 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.711571 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.711653 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.720228 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:04.911498 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:04.911603 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:04.922093 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.111424 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.111494 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.120830 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.312038 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.312096 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.320765 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.512027 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.512071 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.520648 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.712032 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.712087 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.720826 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.911883 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.911985 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.923085 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.923128 1734845 api_server.go:164] Checking apiserver status ...
	I0817 03:10:05.923192 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:10:05.932466 1734845 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:05.932514 1734845 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0817 03:10:05.932533 1734845 kubeadm.go:1032] stopping kube-system containers ...
	I0817 03:10:05.932552 1734845 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:10:05.932619 1734845 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:10:05.959487 1734845 cri.go:76] found id: "5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d"
	I0817 03:10:05.959505 1734845 cri.go:76] found id: "9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919"
	I0817 03:10:05.959510 1734845 cri.go:76] found id: "36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5"
	I0817 03:10:05.959517 1734845 cri.go:76] found id: "d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499"
	I0817 03:10:05.959521 1734845 cri.go:76] found id: "cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60"
	I0817 03:10:05.959526 1734845 cri.go:76] found id: "761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63"
	I0817 03:10:05.959534 1734845 cri.go:76] found id: "55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f"
	I0817 03:10:05.959539 1734845 cri.go:76] found id: "4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc"
	I0817 03:10:05.959548 1734845 cri.go:76] found id: ""
	I0817 03:10:05.959553 1734845 cri.go:221] Stopping containers: [5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d 9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919 36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5 d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499 cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60 761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63 55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f 4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc]
	I0817 03:10:05.959598 1734845 ssh_runner.go:149] Run: which crictl
	I0817 03:10:05.962096 1734845 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 5fd95b123978dca4b6ab24c1dd9e6d59c36548c5b94e6654ec27ef56af4fe88d 9c060387fd4d3d313109509ecd44d3a743c106457f9d93241bffab2e2b93f919 36c454d274c3ef067156d2fbbd72f2376c54830a080725345eefa31afc181bd5 d54ec4ef3496e2491faab4264d90e872e94c939c42dd24df1760f57387e81499 cb3ffc6bc33b5a46ec1025b4e24ecb9fc89717bd91aada6c0e170078704aaf60 761d25293d436e790a2b65ed62a103031486ccf71492f2c69cc59f70713dea63 55af062ccd49e046278cc903ed18f62c05456d1ce2e71712fedbb3d9785dd68f 4432107cb17de93765fa65b6065434f94d55510e3066cc7a7919426783139edc
	I0817 03:10:05.985016 1734845 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 03:10:05.994165 1734845 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:10:06.000009 1734845 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 17 03:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 17 03:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 17 03:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 17 03:08 /etc/kubernetes/scheduler.conf
	
	I0817 03:10:06.000052 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 03:10:06.005722 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 03:10:06.011325 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 03:10:06.016859 1734845 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:06.016897 1734845 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 03:10:06.022694 1734845 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 03:10:06.028263 1734845 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:10:06.028305 1734845 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 03:10:06.034042 1734845 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:10:06.039704 1734845 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 03:10:06.039723 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:06.082382 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.615294 1734845 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.532882729s)
	I0817 03:10:08.615317 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.767910 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.915113 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:08.981033 1734845 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:10:08.981090 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:09.491936 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:09.991932 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:10.492229 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:10.991690 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:11.491572 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:11.991560 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:12.491549 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:12.992465 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:13.491498 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:13.991910 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:14.492177 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:14.991942 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:15.492364 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:10:15.511549 1734845 api_server.go:70] duration metric: took 6.530524968s to wait for apiserver process to appear ...
	I0817 03:10:15.511565 1734845 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:10:15.511573 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:20.514891 1734845 api_server.go:255] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 03:10:21.015169 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:21.565807 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:10:21.565826 1734845 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:10:22.015051 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:22.041924 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:10:22.041942 1734845 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:10:22.515122 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:22.524921 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:10:22.524982 1734845 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:10:23.015376 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:10:23.031209 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
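Note: the block above is minikube's normal apiserver health wait, not part of the failure. The first probe times out while the apiserver is still starting, the 403 is what the unauthenticated probe receives before the RBAC bootstrap roles exist, the 500 bodies list the poststarthooks that have not finished yet (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, apiservice-registration-controller), and polling stops at the first 200. The same per-check listing can be fetched by hand; a minimal sketch, assuming anonymous access to /healthz as in a default minikube cluster, and not something the test itself runs:

	# sketch only: ask the apiserver for the verbose health report
	curl -k https://192.168.49.2:8443/healthz?verbose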
	I0817 03:10:23.058291 1734845 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 03:10:23.058308 1734845 api_server.go:129] duration metric: took 7.546737318s to wait for apiserver health ...
	I0817 03:10:23.058317 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:10:23.058324 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:10:23.061243 1734845 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:10:23.061294 1734845 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:10:23.065558 1734845 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0817 03:10:23.065571 1734845 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:10:23.111700 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
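Note: with the docker driver and the containerd runtime, minikube picks kindnet as the CNI, renders the manifest in memory, copies it to /var/tmp/minikube/cni.yaml over SSH, and applies it with the bundled kubectl (the Run: line directly above). A hedged way to confirm the result from the host, assuming the DaemonSet keeps kindnet's usual app=kindnet label and that the kubeconfig context matches the profile name:

	# sketch only, not part of the test run
	kubectl --context no-preload-20210817030748-1554185 -n kube-system get pods -l app=kindnet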
	I0817 03:10:23.541478 1734845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:10:23.554265 1734845 system_pods.go:59] 9 kube-system pods found
	I0817 03:10:23.554329 1734845 system_pods.go:61] "coredns-78fcd69978-nxgmv" [e5cfb032-8c57-472c-8433-778c79a640b2] Running
	I0817 03:10:23.554348 1734845 system_pods.go:61] "etcd-no-preload-20210817030748-1554185" [a8887420-4d93-40e6-98dc-1983e6a39b00] Running
	I0817 03:10:23.554366 1734845 system_pods.go:61] "kindnet-w55nn" [b64f1d5a-7c2e-44a2-bb39-0461eb1fc34f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 03:10:23.554381 1734845 system_pods.go:61] "kube-apiserver-no-preload-20210817030748-1554185" [e4ac61de-aae2-40be-8dd1-8de97f9fbbf0] Running
	I0817 03:10:23.554399 1734845 system_pods.go:61] "kube-controller-manager-no-preload-20210817030748-1554185" [80d8992e-cee6-4d6c-9a3c-02efe38509c3] Running
	I0817 03:10:23.554425 1734845 system_pods.go:61] "kube-proxy-2wcnd" [98d1ffc4-ef5d-4686-85c5-e6c7c706a5d0] Running
	I0817 03:10:23.554446 1734845 system_pods.go:61] "kube-scheduler-no-preload-20210817030748-1554185" [da680647-558b-4c7f-9ea4-0493359ec794] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 03:10:23.554463 1734845 system_pods.go:61] "metrics-server-7c784ccb57-g4znl" [f28ee3e1-229f-43f7-a493-4ad334a03e12] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:10:23.554479 1734845 system_pods.go:61] "storage-provisioner" [c8fcde2f-327e-462a-8883-25cd16bd9a0f] Running
	I0817 03:10:23.554495 1734845 system_pods.go:74] duration metric: took 13.002435ms to wait for pod list to return data ...
	I0817 03:10:23.554512 1734845 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:10:23.558744 1734845 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:10:23.558803 1734845 node_conditions.go:123] node cpu capacity is 2
	I0817 03:10:23.558880 1734845 node_conditions.go:105] duration metric: took 4.351282ms to run NodePressure ...
	I0817 03:10:23.558910 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:10:23.890429 1734845 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0817 03:10:23.895047 1734845 kubeadm.go:746] kubelet initialised
	I0817 03:10:23.895068 1734845 kubeadm.go:747] duration metric: took 4.62177ms waiting for restarted kubelet to initialise ...
	I0817 03:10:23.895075 1734845 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:10:23.901002 1734845 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:25.915651 1734845 pod_ready.go:102] pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:28.415696 1734845 pod_ready.go:102] pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:30.914925 1734845 pod_ready.go:92] pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:30.914950 1734845 pod_ready.go:81] duration metric: took 7.013913856s waiting for pod "coredns-78fcd69978-nxgmv" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:30.914960 1734845 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:31.424098 1734845 pod_ready.go:92] pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:31.424117 1734845 pod_ready.go:81] duration metric: took 509.148838ms waiting for pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:31.424129 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.436384 1734845 pod_ready.go:92] pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.436405 1734845 pod_ready.go:81] duration metric: took 1.012268093s waiting for pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.436416 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.440968 1734845 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.440984 1734845 pod_ready.go:81] duration metric: took 4.56056ms waiting for pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.441001 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2wcnd" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.445649 1734845 pod_ready.go:92] pod "kube-proxy-2wcnd" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.445666 1734845 pod_ready.go:81] duration metric: took 4.656387ms waiting for pod "kube-proxy-2wcnd" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.445674 1734845 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.513148 1734845 pod_ready.go:92] pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:10:32.513166 1734845 pod_ready.go:81] duration metric: took 67.484919ms waiting for pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:32.513175 1734845 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace to be "Ready" ...
	I0817 03:10:34.918735 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:36.919489 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:39.422799 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:41.992397 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:44.427232 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:46.918669 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:48.918991 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:50.919214 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:53.421151 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:55.918970 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:10:58.420373 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:00.926327 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:03.424380 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:05.919407 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:08.419192 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:10.419678 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:12.918432 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:14.919796 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:17.418745 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:19.420001 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:21.918548 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:23.919596 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:26.419907 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:28.423199 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:30.919598 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:33.419084 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:35.422014 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:37.918235 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:39.919565 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:42.418423 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:44.418583 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:46.919733 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:49.420435 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:51.919638 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:53.923772 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:56.418260 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:11:58.423157 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:00.919146 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:03.418904 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:05.919056 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:08.418918 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:10.919142 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:13.418600 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:15.419023 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:17.919206 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:20.417842 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:22.418846 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:24.418955 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:26.919808 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:29.418685 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:31.418797 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:33.919198 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:36.418764 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:38.920189 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:41.418574 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:43.918305 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:45.919075 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:48.418664 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:50.919726 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:53.419069 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:55.919189 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:12:58.418564 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:00.919982 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:03.418698 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:05.919946 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:08.418314 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:10.418898 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:12.918325 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:14.920960 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:17.417730 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:19.418086 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:21.423527 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:23.919120 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:25.919234 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:28.418290 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:30.418955 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:32.918837 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:34.918880 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:36.919626 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:39.418715 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:41.919114 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:44.418476 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:46.418703 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:48.418748 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:50.918940 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:52.919109 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:54.924520 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:57.417571 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:13:59.418960 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:01.918890 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:03.918954 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:05.919306 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:08.418535 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:10.919974 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:13.417849 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:15.418070 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:17.418845 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:19.919781 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:22.418143 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:24.418467 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:26.919554 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:29.420231 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:31.919823 1734845 pod_ready.go:102] pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace has status "Ready":"False"
	I0817 03:14:32.914868 1734845 pod_ready.go:81] duration metric: took 4m0.401677923s waiting for pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace to be "Ready" ...
	E0817 03:14:32.914891 1734845 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-g4znl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0817 03:14:32.914910 1734845 pod_ready.go:38] duration metric: took 4m9.019812141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:14:32.914976 1734845 kubeadm.go:604] restartCluster took 4m30.03130602s
	W0817 03:14:32.915121 1734845 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
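Note: the 4m0s extra wait above fails on a single pod, metrics-server-7c784ccb57-g4znl. In this profile the metrics-server addon is pointed at an unreachable image (see the later "Using image fake.domain/k8s.gcr.io/echoserver:1.4" line), so the pod can never pull its image and never reports Ready; minikube therefore gives up on restarting the existing cluster and falls back to a full reset and re-init, shown next.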
	I0817 03:14:32.915162 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0817 03:14:35.085846 1734845 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.170660328s)
	I0817 03:14:35.085904 1734845 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0817 03:14:35.095598 1734845 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:14:35.095652 1734845 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:14:35.117507 1734845 cri.go:76] found id: ""
	I0817 03:14:35.117555 1734845 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:14:35.123467 1734845 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 03:14:35.123518 1734845 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:14:35.129250 1734845 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
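Note: the "No such file or directory" errors here are expected rather than a failure: kubeadm reset has just removed /etc/kubernetes, so the stale-config check finds nothing to clean up and minikube proceeds straight to a fresh kubeadm init, ignoring the preflight checks listed on the next line (directory/file availability, port 10250, swap, memory, SystemVerification and bridge-nf-call-iptables) because the control plane runs inside a container.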
	I0817 03:14:35.129282 1734845 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 03:14:35.417431 1734845 out.go:204]   - Generating certificates and keys ...
	I0817 03:14:37.693289 1734845 out.go:204]   - Booting up control plane ...
	I0817 03:14:53.789781 1734845 out.go:204]   - Configuring RBAC rules ...
	I0817 03:14:54.271378 1734845 cni.go:93] Creating CNI manager for ""
	I0817 03:14:54.271399 1734845 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:14:54.273315 1734845 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:14:54.273375 1734845 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:14:54.289118 1734845 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0817 03:14:54.289136 1734845 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:14:54.301138 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 03:14:54.523108 1734845 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 03:14:54.523216 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:54.523274 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=no-preload-20210817030748-1554185 minikube.k8s.io/updated_at=2021_08_17T03_14_54_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:54.649615 1734845 ops.go:34] apiserver oom_adj: -16
	I0817 03:14:54.649729 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:55.213823 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:55.714158 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:56.213831 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:56.713761 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:57.213588 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:57.713978 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:58.213925 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:58.714067 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:59.213543 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:14:59.713381 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:00.213285 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:00.714139 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:01.213576 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:01.713429 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:02.213362 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:02.713600 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:03.213826 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:03.713770 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:04.214126 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:04.714144 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:05.213303 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:05.713594 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:06.213610 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:06.713444 1734845 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:15:06.819820 1734845 kubeadm.go:985] duration metric: took 12.296644721s to wait for elevateKubeSystemPrivileges.
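Note: the repeated "kubectl get sa default" calls above are minikube polling, at roughly 500ms intervals, for the "default" service account to be created before it finishes the RBAC elevation step (the minikube-rbac ClusterRoleBinding granting cluster-admin to kube-system:default, created a few lines earlier); here the wait took about 12.3s. A rough equivalent of that loop, as a sketch only:

	until sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done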
	I0817 03:15:06.819846 1734845 kubeadm.go:392] StartCluster complete in 5m3.993715814s
	I0817 03:15:06.819864 1734845 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:15:06.819944 1734845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:15:06.820939 1734845 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:15:07.350004 1734845 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210817030748-1554185" rescaled to 1
	I0817 03:15:07.350073 1734845 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0817 03:15:07.350125 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 03:15:07.350375 1734845 config.go:177] Loaded profile config "no-preload-20210817030748-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:15:07.350459 1734845 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0817 03:15:07.350509 1734845 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210817030748-1554185"
	I0817 03:15:07.350521 1734845 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210817030748-1554185"
	W0817 03:15:07.350526 1734845 addons.go:147] addon storage-provisioner should already be in state true
	I0817 03:15:07.350549 1734845 host.go:66] Checking if "no-preload-20210817030748-1554185" exists ...
	I0817 03:15:07.351057 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:07.353511 1734845 out.go:177] * Verifying Kubernetes components...
	I0817 03:15:07.353571 1734845 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:15:07.351676 1734845 addons.go:59] Setting metrics-server=true in profile "no-preload-20210817030748-1554185"
	I0817 03:15:07.353644 1734845 addons.go:135] Setting addon metrics-server=true in "no-preload-20210817030748-1554185"
	W0817 03:15:07.353655 1734845 addons.go:147] addon metrics-server should already be in state true
	I0817 03:15:07.353678 1734845 host.go:66] Checking if "no-preload-20210817030748-1554185" exists ...
	I0817 03:15:07.354126 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:07.351687 1734845 addons.go:59] Setting dashboard=true in profile "no-preload-20210817030748-1554185"
	I0817 03:15:07.354373 1734845 addons.go:135] Setting addon dashboard=true in "no-preload-20210817030748-1554185"
	W0817 03:15:07.354386 1734845 addons.go:147] addon dashboard should already be in state true
	I0817 03:15:07.354407 1734845 host.go:66] Checking if "no-preload-20210817030748-1554185" exists ...
	I0817 03:15:07.354872 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:07.351695 1734845 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210817030748-1554185"
	I0817 03:15:07.354952 1734845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210817030748-1554185"
	I0817 03:15:07.355167 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:07.554093 1734845 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 03:15:07.554191 1734845 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:15:07.554200 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 03:15:07.554249 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:15:07.556650 1734845 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0817 03:15:07.556714 1734845 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 03:15:07.556727 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 03:15:07.556780 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:15:07.561054 1734845 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0817 03:15:07.563924 1734845 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0817 03:15:07.563974 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 03:15:07.563986 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 03:15:07.564043 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:15:07.674885 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:15:07.680743 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:15:07.689341 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:15:07.704911 1734845 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210817030748-1554185"
	W0817 03:15:07.704930 1734845 addons.go:147] addon default-storageclass should already be in state true
	I0817 03:15:07.704954 1734845 host.go:66] Checking if "no-preload-20210817030748-1554185" exists ...
	I0817 03:15:07.705396 1734845 cli_runner.go:115] Run: docker container inspect no-preload-20210817030748-1554185 --format={{.State.Status}}
	I0817 03:15:07.760674 1734845 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 03:15:07.760699 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 03:15:07.760750 1734845 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210817030748-1554185
	I0817 03:15:07.828462 1734845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50493 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210817030748-1554185/id_rsa Username:docker}
	I0817 03:15:07.875139 1734845 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210817030748-1554185" to be "Ready" ...
	I0817 03:15:07.877871 1734845 node_ready.go:49] node "no-preload-20210817030748-1554185" has status "Ready":"True"
	I0817 03:15:07.877883 1734845 node_ready.go:38] duration metric: took 2.71825ms waiting for node "no-preload-20210817030748-1554185" to be "Ready" ...
	I0817 03:15:07.877892 1734845 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:15:07.879278 1734845 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 03:15:07.884771 1734845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-255bv" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:08.039933 1734845 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 03:15:08.039955 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0817 03:15:08.143347 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 03:15:08.143400 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 03:15:08.157756 1734845 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:15:08.166354 1734845 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 03:15:08.208956 1734845 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 03:15:08.208980 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 03:15:08.307288 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 03:15:08.307312 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 03:15:08.521940 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 03:15:08.521964 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 03:15:08.541828 1734845 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:15:08.541851 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 03:15:08.600817 1734845 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:15:08.687224 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 03:15:08.687290 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0817 03:15:08.911998 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 03:15:08.912020 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 03:15:08.925863 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 03:15:08.925881 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 03:15:08.991825 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 03:15:08.991848 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 03:15:09.005005 1734845 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.125697496s)
	I0817 03:15:09.005031 1734845 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
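Note: the long pipeline that completed above edits the coredns ConfigMap in place: sed inserts a hosts plugin block ahead of the existing "forward . /etc/resolv.conf" line and the result is pushed back with kubectl replace. Reconstructed from that command, the injected stanza is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

so host.minikube.internal resolves to the docker network gateway (the host side) from inside the cluster.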
	I0817 03:15:09.062106 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 03:15:09.062127 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 03:15:09.077182 1734845 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:15:09.077201 1734845 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 03:15:09.120420 1734845 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:15:09.499719 1734845 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210817030748-1554185"
	I0817 03:15:09.942558 1734845 pod_ready.go:102] pod "coredns-78fcd69978-255bv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:15:10.208986 1734845 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.088525772s)
	I0817 03:15:10.210980 1734845 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0817 03:15:10.211007 1734845 addons.go:344] enableAddons completed in 2.860552731s
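Note: every addon above follows the same two-step pattern: the manifest is written from memory to a file under /etc/kubernetes/addons/ over SSH, then applied in a batch with the bundled kubectl against the cluster's own kubeconfig (the storage-provisioner, metrics-server and dashboard apply commands a few lines up). The earlier "addon ... should already be in state true" warnings show that dashboard, metrics-server, storage-provisioner and default-storageclass were already enabled before the restart and are simply being re-applied here.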
	I0817 03:15:12.394778 1734845 pod_ready.go:102] pod "coredns-78fcd69978-255bv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:15:14.395328 1734845 pod_ready.go:102] pod "coredns-78fcd69978-255bv" in "kube-system" namespace has status "Ready":"False"
	I0817 03:15:15.894556 1734845 pod_ready.go:92] pod "coredns-78fcd69978-255bv" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:15.894578 1734845 pod_ready.go:81] duration metric: took 8.009762641s waiting for pod "coredns-78fcd69978-255bv" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:15.894587 1734845 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-9dmpz" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.900840 1734845 pod_ready.go:97] error getting pod "coredns-78fcd69978-9dmpz" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-9dmpz" not found
	I0817 03:15:16.900870 1734845 pod_ready.go:81] duration metric: took 1.006275778s waiting for pod "coredns-78fcd69978-9dmpz" in "kube-system" namespace to be "Ready" ...
	E0817 03:15:16.900879 1734845 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-9dmpz" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-9dmpz" not found
	I0817 03:15:16.900886 1734845 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.904873 1734845 pod_ready.go:92] pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:16.904892 1734845 pod_ready.go:81] duration metric: took 3.996071ms waiting for pod "etcd-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.904904 1734845 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.908416 1734845 pod_ready.go:92] pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:16.908436 1734845 pod_ready.go:81] duration metric: took 3.523894ms waiting for pod "kube-apiserver-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.908444 1734845 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.912269 1734845 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:16.912290 1734845 pod_ready.go:81] duration metric: took 3.839026ms waiting for pod "kube-controller-manager-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.912300 1734845 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fd8hs" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.916134 1734845 pod_ready.go:92] pod "kube-proxy-fd8hs" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:16.916149 1734845 pod_ready.go:81] duration metric: took 3.815961ms waiting for pod "kube-proxy-fd8hs" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:16.916157 1734845 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:17.295352 1734845 pod_ready.go:92] pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace has status "Ready":"True"
	I0817 03:15:17.295374 1734845 pod_ready.go:81] duration metric: took 379.209584ms waiting for pod "kube-scheduler-no-preload-20210817030748-1554185" in "kube-system" namespace to be "Ready" ...
	I0817 03:15:17.295382 1734845 pod_ready.go:38] duration metric: took 9.417480039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:15:17.295425 1734845 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:15:17.295482 1734845 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:15:17.313198 1734845 api_server.go:70] duration metric: took 9.963092191s to wait for apiserver process to appear ...
	I0817 03:15:17.313220 1734845 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:15:17.313229 1734845 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:15:17.321429 1734845 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:15:17.322195 1734845 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 03:15:17.322234 1734845 api_server.go:129] duration metric: took 8.988733ms to wait for apiserver health ...
	I0817 03:15:17.322243 1734845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:15:17.495925 1734845 system_pods.go:59] 9 kube-system pods found
	I0817 03:15:17.495960 1734845 system_pods.go:61] "coredns-78fcd69978-255bv" [841a6924-fa23-40b4-b6b6-b9d024444fc5] Running
	I0817 03:15:17.495966 1734845 system_pods.go:61] "etcd-no-preload-20210817030748-1554185" [83af6952-189b-4ce5-8707-0d14b34f2838] Running
	I0817 03:15:17.495985 1734845 system_pods.go:61] "kindnet-r9qg6" [5210d460-fdab-41e1-ad63-b434142322d6] Running
	I0817 03:15:17.495997 1734845 system_pods.go:61] "kube-apiserver-no-preload-20210817030748-1554185" [e4347f56-14c8-4cd6-a3fc-9d8a0caf0a8f] Running
	I0817 03:15:17.496002 1734845 system_pods.go:61] "kube-controller-manager-no-preload-20210817030748-1554185" [f43867c6-c471-4383-81f9-5b8231a5b73c] Running
	I0817 03:15:17.496015 1734845 system_pods.go:61] "kube-proxy-fd8hs" [59bcc4a4-33b2-44c2-8da3-a777113aaf58] Running
	I0817 03:15:17.496033 1734845 system_pods.go:61] "kube-scheduler-no-preload-20210817030748-1554185" [116e59d0-36e8-41fa-bd94-80d88aa2b8ce] Running
	I0817 03:15:17.496046 1734845 system_pods.go:61] "metrics-server-7c784ccb57-wct66" [ffb75899-dc46-4aaa-945c-7b87ae2e020f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:15:17.496058 1734845 system_pods.go:61] "storage-provisioner" [7a0dec2f-5605-4e68-8128-d88da36ed6dd] Running
	I0817 03:15:17.496070 1734845 system_pods.go:74] duration metric: took 173.818396ms to wait for pod list to return data ...
	I0817 03:15:17.496081 1734845 default_sa.go:34] waiting for default service account to be created ...
	I0817 03:15:17.699897 1734845 default_sa.go:45] found service account: "default"
	I0817 03:15:17.699921 1734845 default_sa.go:55] duration metric: took 203.834329ms for default service account to be created ...
	I0817 03:15:17.699930 1734845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 03:15:17.894490 1734845 system_pods.go:86] 9 kube-system pods found
	I0817 03:15:17.894524 1734845 system_pods.go:89] "coredns-78fcd69978-255bv" [841a6924-fa23-40b4-b6b6-b9d024444fc5] Running
	I0817 03:15:17.894531 1734845 system_pods.go:89] "etcd-no-preload-20210817030748-1554185" [83af6952-189b-4ce5-8707-0d14b34f2838] Running
	I0817 03:15:17.894536 1734845 system_pods.go:89] "kindnet-r9qg6" [5210d460-fdab-41e1-ad63-b434142322d6] Running
	I0817 03:15:17.894555 1734845 system_pods.go:89] "kube-apiserver-no-preload-20210817030748-1554185" [e4347f56-14c8-4cd6-a3fc-9d8a0caf0a8f] Running
	I0817 03:15:17.894583 1734845 system_pods.go:89] "kube-controller-manager-no-preload-20210817030748-1554185" [f43867c6-c471-4383-81f9-5b8231a5b73c] Running
	I0817 03:15:17.894595 1734845 system_pods.go:89] "kube-proxy-fd8hs" [59bcc4a4-33b2-44c2-8da3-a777113aaf58] Running
	I0817 03:15:17.894601 1734845 system_pods.go:89] "kube-scheduler-no-preload-20210817030748-1554185" [116e59d0-36e8-41fa-bd94-80d88aa2b8ce] Running
	I0817 03:15:17.894617 1734845 system_pods.go:89] "metrics-server-7c784ccb57-wct66" [ffb75899-dc46-4aaa-945c-7b87ae2e020f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 03:15:17.894628 1734845 system_pods.go:89] "storage-provisioner" [7a0dec2f-5605-4e68-8128-d88da36ed6dd] Running
	I0817 03:15:17.894636 1734845 system_pods.go:126] duration metric: took 194.70122ms to wait for k8s-apps to be running ...
	I0817 03:15:17.894658 1734845 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 03:15:17.894718 1734845 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:15:17.903613 1734845 system_svc.go:56] duration metric: took 8.962461ms WaitForService to wait for kubelet.
	I0817 03:15:17.903634 1734845 kubeadm.go:547] duration metric: took 10.553530802s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 03:15:17.903673 1734845 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:15:18.094517 1734845 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:15:18.094571 1734845 node_conditions.go:123] node cpu capacity is 2
	I0817 03:15:18.094594 1734845 node_conditions.go:105] duration metric: took 190.916323ms to run NodePressure ...
	I0817 03:15:18.094613 1734845 start.go:231] waiting for startup goroutines ...
	I0817 03:15:18.148145 1734845 start.go:462] kubectl: 1.21.3, cluster: 1.22.0-rc.0 (minor skew: 1)
	I0817 03:15:18.150374 1734845 out.go:177] * Done! kubectl is now configured to use "no-preload-20210817030748-1554185" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	8ca174a35e533       523cad1a4df73       17 seconds ago      Exited              dashboard-metrics-scraper   1                   57b52fd56663c
	1c1aaa8275d7f       85e6c0cff043f       22 seconds ago      Running             kubernetes-dashboard        0                   2d2f70d019019
	0afb6f1b41c82       66749159455b3       23 seconds ago      Running             storage-provisioner         0                   e8863073b7ce8
	555538de5495c       6d3ffc2696ac2       25 seconds ago      Running             coredns                     0                   26bc1107196ae
	664e7d8467668       5f7fafb97c956       26 seconds ago      Running             kube-proxy                  0                   e0eb4637af52a
	6975143cba57b       f37b7c809e5dc       26 seconds ago      Running             kindnet-cni                 0                   947112f02450c
	9f6890fa1f8d4       41065afd0ca8b       49 seconds ago      Running             kube-controller-manager     2                   72b86153695a4
	db8ca1b5b254e       82ecd1e357878       49 seconds ago      Running             kube-scheduler              2                   d80c4c007562b
	f77fa64a35d3f       2252d5eb703b0       49 seconds ago      Running             etcd                        2                   6269a024ac283
	aede5058d779c       6fe8178781397       49 seconds ago      Running             kube-apiserver              2                   411fb037027e5
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 03:09:46 UTC, end at Tue 2021-08-17 03:15:34 UTC. --
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.069313823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.070069179Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.074177478Z" level=info msg="CreateContainer within sandbox \"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.387131127Z" level=info msg="CreateContainer within sandbox \"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.387612968Z" level=info msg="StartContainer for \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.462785358Z" level=info msg="Finish piping stderr of container \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.463072108Z" level=info msg="Finish piping stdout of container \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.465435760Z" level=info msg="StartContainer for \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\" returns successfully"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.465570807Z" level=info msg="TaskExit event &TaskExit{ContainerID:8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9,ID:8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9,Pid:4762,ExitStatus:1,ExitedAt:2021-08-17 03:15:16.462898104 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.527830975Z" level=info msg="shim disconnected" id=8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.527985451Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.736802220Z" level=info msg="CreateContainer within sandbox \"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.755721634Z" level=info msg="CreateContainer within sandbox \"57b52fd56663ca1ab330f8d7382583f95f527bf09fae3707380f997cf3d5c116\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.756175636Z" level=info msg="StartContainer for \"8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.838907441Z" level=info msg="Finish piping stdout of container \"8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.839085400Z" level=info msg="Finish piping stderr of container \"8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1\""
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.843025758Z" level=info msg="StartContainer for \"8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1\" returns successfully"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.843113027Z" level=info msg="TaskExit event &TaskExit{ContainerID:8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1,ID:8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1,Pid:4830,ExitStatus:1,ExitedAt:2021-08-17 03:15:16.840489677 +0000 UTC,XXX_unrecognized:[],}"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.878964363Z" level=info msg="shim disconnected" id=8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1
	Aug 17 03:15:16 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:16.879017614Z" level=error msg="copy shim log" error="read /proc/self/fd/99: file already closed"
	Aug 17 03:15:17 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:17.745141895Z" level=info msg="RemoveContainer for \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\""
	Aug 17 03:15:17 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:17.756194969Z" level=info msg="RemoveContainer for \"8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9\" returns successfully"
	Aug 17 03:15:24 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:24.602405025Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 17 03:15:24 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:24.606450080Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Aug 17 03:15:24 no-preload-20210817030748-1554185 containerd[341]: time="2021-08-17T03:15:24.608537632Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	
	* 
	* ==> coredns [555538de5495cadb67dfc58796a361f1705dec99c90f64e7160947d8290f5564] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/arm64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20210817030748-1554185
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-20210817030748-1554185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=no-preload-20210817030748-1554185
	                    minikube.k8s.io/updated_at=2021_08_17T03_14_54_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 03:14:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20210817030748-1554185
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 03:15:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 03:15:05 +0000   Tue, 17 Aug 2021 03:14:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 03:15:05 +0000   Tue, 17 Aug 2021 03:14:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 03:15:05 +0000   Tue, 17 Aug 2021 03:14:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 03:15:05 +0000   Tue, 17 Aug 2021 03:15:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    no-preload-20210817030748-1554185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  81118084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033456Ki
	  pods:               110
	System Info:
	  Machine ID:                 add0925e5dc04b69af3049194672a64f
	  System UUID:                3aabbd15-2269-48c0-a588-935b665ad168
	  Boot ID:                    d38b6680-5851-4999-9143-dbb6b8b6a5f7
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-255bv                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-20210817030748-1554185                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-r9qg6                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-20210817030748-1554185              250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-20210817030748-1554185     200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-fd8hs                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-20210817030748-1554185              100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 metrics-server-7c784ccb57-wct66                               100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         25s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-85bpp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-l6fk8                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             520Mi (6%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  51s (x5 over 51s)  kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x4 over 51s)  kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x4 over 51s)  kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasSufficientPID
	  Normal  Starting                 35s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s                kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s                kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s                kubelet  Node no-preload-20210817030748-1554185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                29s                kubelet  Node no-preload-20210817030748-1554185 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [f77fa64a35d3fb38a8a5161ff1091cdd2eb2c48e0acb110b4aced13260e3dc2d] <==
	* raft2021/08/17 03:14:44 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:14:44.525125 W | auth: simple token is not cryptographically signed
	2021-08-17 03:14:44.549678 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-17 03:14:44.557828 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-08-17 03:14:44.590323 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 03:14:44.590455 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/17 03:14:44 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:14:44.590699 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 03:14:44.590734 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 03:14:45 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 03:14:45 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 03:14:45 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 03:14:45 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 03:14:45 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 03:14:45.226827 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 03:14:45.227787 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 03:14:45.227869 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 03:14:45.227932 I | etcdserver: published {Name:no-preload-20210817030748-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 03:14:45.228068 I | embed: ready to serve client requests
	2021-08-17 03:14:45.229431 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 03:14:45.238907 I | embed: ready to serve client requests
	2021-08-17 03:14:45.324168 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 03:15:08.706172 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 03:15:18.365267 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 03:15:28.364964 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  03:15:34 up 10:57,  0 users,  load average: 3.25, 2.15, 1.78
	Linux no-preload-20210817030748-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [aede5058d779c358e247b2735f4ac4152e4f2b82234e975e24d36079d1b9b3e3] <==
	* I0817 03:14:51.167068       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 03:14:51.192382       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 03:14:51.194882       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 03:14:51.195307       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0817 03:14:51.865793       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 03:14:51.865818       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 03:14:51.896910       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 03:14:51.903213       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 03:14:51.903327       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 03:14:52.380908       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 03:14:52.411286       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 03:14:52.499569       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 03:14:52.500469       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 03:14:52.503958       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 03:14:53.120071       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 03:14:54.115578       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 03:14:54.236979       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 03:14:59.613535       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 03:15:06.637576       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 03:15:06.826987       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	E0817 03:15:09.513302       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	W0817 03:15:11.394581       1 handler_proxy.go:104] no RequestInfo found in the context
	E0817 03:15:11.394641       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 03:15:11.394648       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [9f6890fa1f8d4cbb618f7dc55910c20cb42c77b5ec4194dcadc0a2d92119bd18] <==
	* I0817 03:15:09.585383       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0817 03:15:09.685204       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0817 03:15:09.704288       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.719311       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.719630       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.750416       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:15:09.780136       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:15:09.781016       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.781462       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:15:09.781480       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.816741       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:15:09.817015       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.817049       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:15:09.817063       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.840456       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 03:15:09.840793       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.840829       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:15:09.840844       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.857028       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.857075       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 03:15:09.863427       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 03:15:09.863473       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 03:15:09.938021       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-85bpp"
	I0817 03:15:09.950876       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-l6fk8"
	I0817 03:15:10.925644       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [664e7d846766813360e1d299742608802ff4e3fb2d9a3b8b9fb4fca0e36efd44] <==
	* I0817 03:15:07.758888       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 03:15:07.758939       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 03:15:07.758953       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0817 03:15:07.840730       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 03:15:07.840762       1 server_others.go:212] Using iptables Proxier.
	I0817 03:15:07.840773       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 03:15:07.840789       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 03:15:07.841096       1 server.go:649] Version: v1.22.0-rc.0
	I0817 03:15:07.857151       1 config.go:315] Starting service config controller
	I0817 03:15:07.857163       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 03:15:07.857212       1 config.go:224] Starting endpoint slice config controller
	I0817 03:15:07.857215       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0817 03:15:07.862062       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210817030748-1554185.169bf994d49356c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ee84ef227c2c9, ext:583304160, loc:(*time.Location)(0x2698ec0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210817030748-1554185", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"",
Name:"no-preload-20210817030748-1554185", UID:"no-preload-20210817030748-1554185", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210817030748-1554185.169bf994d49356c7" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0817 03:15:07.958864       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 03:15:07.959022       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [db8ca1b5b254e0ed7ae984f2b93106a57e5b18d2bbdbe7866702a9b275aa46d3] <==
	* W0817 03:14:51.031663       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 03:14:51.130483       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0817 03:14:51.130962       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0817 03:14:51.134011       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0817 03:14:51.136501       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 03:14:51.136529       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0817 03:14:51.151652       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 03:14:51.151786       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 03:14:51.151918       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 03:14:51.152020       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 03:14:51.152146       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 03:14:51.152251       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 03:14:51.152345       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 03:14:51.152470       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 03:14:51.152559       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 03:14:51.152635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 03:14:51.152786       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 03:14:51.152874       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 03:14:51.152898       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 03:14:51.159050       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 03:14:52.117892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 03:14:52.127031       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 03:14:52.181725       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 03:14:52.181950       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0817 03:14:55.337154       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 03:09:46 UTC, end at Tue 2021-08-17 03:15:34 UTC. --
	Aug 17 03:15:14 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:14.201950    3297 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff509214-3ed3-48e0-9233-0d56cc3583e8-config-volume\") on node \"no-preload-20210817030748-1554185\" DevicePath \"\""
	Aug 17 03:15:14 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:14.727101    3297 scope.go:110] "RemoveContainer" containerID="3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99"
	Aug 17 03:15:14 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:14.735192    3297 scope.go:110] "RemoveContainer" containerID="3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99"
	Aug 17 03:15:14 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:14.735840    3297 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99\": not found" containerID="3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99"
	Aug 17 03:15:14 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:14.735878    3297 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99} err="failed to get container status \"3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c6d7e45d6d7465eff42aaee33e72c1821565fbdcb9d7ea3752b36c0e7537a99\": not found"
	Aug 17 03:15:16 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:16.732898    3297 scope.go:110] "RemoveContainer" containerID="8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9"
	Aug 17 03:15:17 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:17.606324    3297 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ff509214-3ed3-48e0-9233-0d56cc3583e8 path="/var/lib/kubelet/pods/ff509214-3ed3-48e0-9233-0d56cc3583e8/volumes"
	Aug 17 03:15:17 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:17.743496    3297 scope.go:110] "RemoveContainer" containerID="8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9"
	Aug 17 03:15:17 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:17.743839    3297 scope.go:110] "RemoveContainer" containerID="8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1"
	Aug 17 03:15:17 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:17.744116    3297 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-85bpp_kubernetes-dashboard(dff4da5a-2bfd-4949-9ae0-1ac5b7d02599)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-85bpp" podUID=dff4da5a-2bfd-4949-9ae0-1ac5b7d02599
	Aug 17 03:15:17 no-preload-20210817030748-1554185 kubelet[3297]: W0817 03:15:17.937991    3297 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/poddff4da5a-2bfd-4949-9ae0-1ac5b7d02599/8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9 WatchSource:0}: container "8793f3aa8de66a39d09b2fa8613ad3b217394a1f438b57581451baede824b8a9" in namespace "k8s.io": not found
	Aug 17 03:15:18 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:18.746507    3297 scope.go:110] "RemoveContainer" containerID="8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1"
	Aug 17 03:15:18 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:18.746796    3297 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-85bpp_kubernetes-dashboard(dff4da5a-2bfd-4949-9ae0-1ac5b7d02599)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-85bpp" podUID=dff4da5a-2bfd-4949-9ae0-1ac5b7d02599
	Aug 17 03:15:19 no-preload-20210817030748-1554185 kubelet[3297]: W0817 03:15:19.444121    3297 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/poddff4da5a-2bfd-4949-9ae0-1ac5b7d02599/8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1 WatchSource:0}: task 8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1 not found: not found
	Aug 17 03:15:19 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:19.956731    3297 scope.go:110] "RemoveContainer" containerID="8ca174a35e53340e0d2c801f8b5e34dd7b44fbe2a9d576f216861137075bf0a1"
	Aug 17 03:15:19 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:19.957035    3297 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-85bpp_kubernetes-dashboard(dff4da5a-2bfd-4949-9ae0-1ac5b7d02599)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-85bpp" podUID=dff4da5a-2bfd-4949-9ae0-1ac5b7d02599
	Aug 17 03:15:20 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:20.607398    3297 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod7a0dec2f-5605-4e68-8128-d88da36ed6dd\": RecentStats: unable to find data in memory cache]"
	Aug 17 03:15:24 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:24.608709    3297 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 03:15:24 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:24.608753    3297 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 17 03:15:24 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:24.608857    3297 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nqc4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handl
er{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]
VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-wct66_kube-system(ffb75899-dc46-4aaa-945c-7b87ae2e020f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Aug 17 03:15:24 no-preload-20210817030748-1554185 kubelet[3297]: E0817 03:15:24.608900    3297 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-7c784ccb57-wct66" podUID=ffb75899-dc46-4aaa-945c-7b87ae2e020f
	Aug 17 03:15:29 no-preload-20210817030748-1554185 kubelet[3297]: I0817 03:15:29.437481    3297 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 17 03:15:29 no-preload-20210817030748-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 03:15:29 no-preload-20210817030748-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 03:15:29 no-preload-20210817030748-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [1c1aaa8275d7fcfa231d3410f2712a19cf1f50fb2d9f0b67a60a621442a6ba8d] <==
	* 2021/08/17 03:15:12 Starting overwatch
	2021/08/17 03:15:12 Using namespace: kubernetes-dashboard
	2021/08/17 03:15:12 Using in-cluster config to connect to apiserver
	2021/08/17 03:15:12 Using secret token for csrf signing
	2021/08/17 03:15:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/17 03:15:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/17 03:15:12 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/17 03:15:12 Generating JWE encryption key
	2021/08/17 03:15:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/17 03:15:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/17 03:15:12 Initializing JWE encryption key from synchronized object
	2021/08/17 03:15:12 Creating in-cluster Sidecar client
	2021/08/17 03:15:12 Serving insecurely on HTTP port: 9090
	2021/08/17 03:15:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [0afb6f1b41c8225e19c860ceb823eaa1926b9484d3e92c552cc5068fb04f25fe] <==
	* I0817 03:15:10.404753       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 03:15:10.425203       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 03:15:10.425420       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 03:15:10.432298       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 03:15:10.432420       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20210817030748-1554185_a1d6c892-8e55-4645-94a1-949ba8f71cee!
	I0817 03:15:10.433196       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4650a313-4fc6-4e41-a1e6-959a74610025", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20210817030748-1554185_a1d6c892-8e55-4645-94a1-949ba8f71cee became leader
	I0817 03:15:10.533219       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20210817030748-1554185_a1d6c892-8e55-4645-94a1-949ba8f71cee!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185: exit status 2 (350.701411ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context no-preload-20210817030748-1554185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-wct66
helpers_test.go:273: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context no-preload-20210817030748-1554185 describe pod metrics-server-7c784ccb57-wct66
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20210817030748-1554185 describe pod metrics-server-7c784ccb57-wct66: exit status 1 (118.989455ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-wct66" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context no-preload-20210817030748-1554185 describe pod metrics-server-7c784ccb57-wct66: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.04s)
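
The non-running pod flagged at helpers_test.go:271 is metrics-server-7c784ccb57-wct66, and the kubelet entries at the top of this dump show why: its container is stuck in ErrImagePull because the image reference fake.domain/k8s.gcr.io/echoserver:1.4 names a registry host that does not resolve in this environment ("dial tcp: lookup fake.domain: no such host"). A minimal Go sketch of that same resolution check, using only the standard library (the host name is taken from the log above; this is an illustration, not part of the test suite):

package main

import (
	"fmt"
	"net"
)

// Check whether the registry host used by the metrics-server image reference
// resolves; in the environment captured above it does not, which is why the
// kubelet keeps reporting ErrImagePull for the pod.
func main() {
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		// Expected path here: a DNS error matching the kubelet's
		// "dial tcp: lookup fake.domain: no such host".
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}

Because the host never resolves, the pull can never succeed, so the pod stays in ErrImagePull rather than reaching Running.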

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (24.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-20210817031538-1554185 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-20210817031538-1554185 --alsologtostderr -v=1: exit status 80 (1.948187689s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-20210817031538-1554185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 03:18:05.262671 1760466 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:18:05.262759 1760466 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:18:05.262770 1760466 out.go:311] Setting ErrFile to fd 2...
	I0817 03:18:05.262773 1760466 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:18:05.262917 1760466 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:18:05.263096 1760466 out.go:305] Setting JSON to false
	I0817 03:18:05.263126 1760466 mustload.go:65] Loading cluster: newest-cni-20210817031538-1554185
	I0817 03:18:05.263498 1760466 config.go:177] Loaded profile config "newest-cni-20210817031538-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:18:05.263984 1760466 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:05.302916 1760466 host.go:66] Checking if "newest-cni-20210817031538-1554185" exists ...
	I0817 03:18:05.303604 1760466 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-20210817031538-1554185 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0817 03:18:05.307174 1760466 out.go:177] * Pausing node newest-cni-20210817031538-1554185 ... 
	I0817 03:18:05.307201 1760466 host.go:66] Checking if "newest-cni-20210817031538-1554185" exists ...
	I0817 03:18:05.307469 1760466 ssh_runner.go:149] Run: systemctl --version
	I0817 03:18:05.307505 1760466 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:18:05.344189 1760466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:18:05.445925 1760466 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:18:05.453824 1760466 pause.go:50] kubelet running: true
	I0817 03:18:05.453894 1760466 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 03:18:05.617596 1760466 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 03:18:05.617672 1760466 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 03:18:05.687942 1760466 cri.go:76] found id: "3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180"
	I0817 03:18:05.687959 1760466 cri.go:76] found id: "ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d"
	I0817 03:18:05.687969 1760466 cri.go:76] found id: "482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9"
	I0817 03:18:05.687973 1760466 cri.go:76] found id: "6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae"
	I0817 03:18:05.687977 1760466 cri.go:76] found id: "ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935"
	I0817 03:18:05.687982 1760466 cri.go:76] found id: "4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727"
	I0817 03:18:05.687986 1760466 cri.go:76] found id: "cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2"
	I0817 03:18:05.687991 1760466 cri.go:76] found id: "c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a"
	I0817 03:18:05.687995 1760466 cri.go:76] found id: "ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da"
	I0817 03:18:05.688001 1760466 cri.go:76] found id: "0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587"
	I0817 03:18:05.688005 1760466 cri.go:76] found id: "21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158"
	I0817 03:18:05.688010 1760466 cri.go:76] found id: "a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58"
	I0817 03:18:05.688014 1760466 cri.go:76] found id: ""
	I0817 03:18:05.688053 1760466 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:18:05.720098 1760466 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b","pid":863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b/rootfs","created":"2021-08-17T03:17:53.584918616Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210817031538-1554185_89facb261023507d1607ed4be0355294"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180","pid":1243,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3480b6e9d180
1dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180/rootfs","created":"2021-08-17T03:18:02.893838308Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9","pid":1055,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9/rootfs","created":"2021-08-17T03:17:53.927987527Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.
cri.sandbox-id":"c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727","pid":1008,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727/rootfs","created":"2021-08-17T03:17:53.854978976Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae","pid":1024,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae","r
ootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae/rootfs","created":"2021-08-17T03:17:53.839447465Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d","pid":1257,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d/rootfs","created":"2021-08-17T03:18:02.895743823Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad
02a9f05f8a4075"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935","pid":1001,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935/rootfs","created":"2021-08-17T03:17:53.859690313Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038","pid":906,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.i
o/ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038/rootfs","created":"2021-08-17T03:17:53.629773276Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210817031538-1554185_b96a20828c8e2f9eea117dba0ac4a7a2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05","pid":913,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05/rootfs","created":"2021-08-17T03:17:53.637610079Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd2
81f2c4effb2e770731c05","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210817031538-1554185_4edc3ccdb375133081352b5372610d48"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44","pid":881,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44/rootfs","created":"2021-08-17T03:17:53.609153226Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210817031538-1554185_3b018754fa8d4f77e016923e2f4bd265"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e81492055fbc1734a279e9bf49c0402596841698dd8fb887
ad02a9f05f8a4075","pid":1177,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075/rootfs","created":"2021-08-17T03:18:02.674386109Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-clj8s_929df3e0-ea05-4a55-b16d-ac959dbf86a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217","pid":1176,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe7215298d43c641e1afd3610bbda6ea9db22c1a88da41796
9a43c130cad7217/rootfs","created":"2021-08-17T03:18:02.691534186Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-w8m9q_caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf"},"owner":"root"}]
	I0817 03:18:05.720280 1760466 cri.go:113] list returned 12 containers
	I0817 03:18:05.720292 1760466 cri.go:116] container: {ID:085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b Status:running}
	I0817 03:18:05.720308 1760466 cri.go:118] skipping 085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b - not in ps
	I0817 03:18:05.720319 1760466 cri.go:116] container: {ID:3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180 Status:running}
	I0817 03:18:05.720324 1760466 cri.go:116] container: {ID:482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9 Status:running}
	I0817 03:18:05.720331 1760466 cri.go:116] container: {ID:4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727 Status:running}
	I0817 03:18:05.720339 1760466 cri.go:116] container: {ID:6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae Status:running}
	I0817 03:18:05.720344 1760466 cri.go:116] container: {ID:ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d Status:running}
	I0817 03:18:05.720351 1760466 cri.go:116] container: {ID:ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935 Status:running}
	I0817 03:18:05.720356 1760466 cri.go:116] container: {ID:ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038 Status:running}
	I0817 03:18:05.720362 1760466 cri.go:118] skipping ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038 - not in ps
	I0817 03:18:05.720370 1760466 cri.go:116] container: {ID:c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05 Status:running}
	I0817 03:18:05.720376 1760466 cri.go:118] skipping c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05 - not in ps
	I0817 03:18:05.720385 1760466 cri.go:116] container: {ID:e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44 Status:running}
	I0817 03:18:05.720390 1760466 cri.go:118] skipping e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44 - not in ps
	I0817 03:18:05.720394 1760466 cri.go:116] container: {ID:e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075 Status:running}
	I0817 03:18:05.720399 1760466 cri.go:118] skipping e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075 - not in ps
	I0817 03:18:05.720407 1760466 cri.go:116] container: {ID:fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217 Status:running}
	I0817 03:18:05.720413 1760466 cri.go:118] skipping fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217 - not in ps
	I0817 03:18:05.720456 1760466 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180
	I0817 03:18:05.744601 1760466 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180 482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9
	I0817 03:18:05.765638 1760466 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180 482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:18:05Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0817 03:18:06.042474 1760466 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:18:06.051932 1760466 pause.go:50] kubelet running: false
	I0817 03:18:06.051985 1760466 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 03:18:06.153804 1760466 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 03:18:06.153864 1760466 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 03:18:06.230230 1760466 cri.go:76] found id: "3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180"
	I0817 03:18:06.230249 1760466 cri.go:76] found id: "ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d"
	I0817 03:18:06.230260 1760466 cri.go:76] found id: "482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9"
	I0817 03:18:06.230265 1760466 cri.go:76] found id: "6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae"
	I0817 03:18:06.230270 1760466 cri.go:76] found id: "ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935"
	I0817 03:18:06.230275 1760466 cri.go:76] found id: "4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727"
	I0817 03:18:06.230283 1760466 cri.go:76] found id: "cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2"
	I0817 03:18:06.230287 1760466 cri.go:76] found id: "c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a"
	I0817 03:18:06.230292 1760466 cri.go:76] found id: "ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da"
	I0817 03:18:06.230299 1760466 cri.go:76] found id: "0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587"
	I0817 03:18:06.230309 1760466 cri.go:76] found id: "21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158"
	I0817 03:18:06.230314 1760466 cri.go:76] found id: "a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58"
	I0817 03:18:06.230327 1760466 cri.go:76] found id: ""
	I0817 03:18:06.230366 1760466 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:18:06.262740 1760466 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b","pid":863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b/rootfs","created":"2021-08-17T03:17:53.584918616Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210817031538-1554185_89facb261023507d1607ed4be0355294"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180","pid":1243,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3480b6e9d1801
dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180/rootfs","created":"2021-08-17T03:18:02.893838308Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9","pid":1055,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9/rootfs","created":"2021-08-17T03:17:53.927987527Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.c
ri.sandbox-id":"c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727","pid":1008,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727/rootfs","created":"2021-08-17T03:17:53.854978976Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae","pid":1024,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae","ro
otfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae/rootfs","created":"2021-08-17T03:17:53.839447465Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d","pid":1257,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d/rootfs","created":"2021-08-17T03:18:02.895743823Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad0
2a9f05f8a4075"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935","pid":1001,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935/rootfs","created":"2021-08-17T03:17:53.859690313Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038","pid":906,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io
/ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038/rootfs","created":"2021-08-17T03:17:53.629773276Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210817031538-1554185_b96a20828c8e2f9eea117dba0ac4a7a2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05","pid":913,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05/rootfs","created":"2021-08-17T03:17:53.637610079Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd28
1f2c4effb2e770731c05","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210817031538-1554185_4edc3ccdb375133081352b5372610d48"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44","pid":881,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44/rootfs","created":"2021-08-17T03:17:53.609153226Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210817031538-1554185_3b018754fa8d4f77e016923e2f4bd265"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e81492055fbc1734a279e9bf49c0402596841698dd8fb887a
d02a9f05f8a4075","pid":1177,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075/rootfs","created":"2021-08-17T03:18:02.674386109Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-clj8s_929df3e0-ea05-4a55-b16d-ac959dbf86a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217","pid":1176,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969
a43c130cad7217/rootfs","created":"2021-08-17T03:18:02.691534186Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-w8m9q_caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf"},"owner":"root"}]
	I0817 03:18:06.262915 1760466 cri.go:113] list returned 12 containers
	I0817 03:18:06.262929 1760466 cri.go:116] container: {ID:085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b Status:running}
	I0817 03:18:06.262939 1760466 cri.go:118] skipping 085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b - not in ps
	I0817 03:18:06.262948 1760466 cri.go:116] container: {ID:3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180 Status:paused}
	I0817 03:18:06.262956 1760466 cri.go:122] skipping {3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180 paused}: state = "paused", want "running"
	I0817 03:18:06.262975 1760466 cri.go:116] container: {ID:482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9 Status:running}
	I0817 03:18:06.262981 1760466 cri.go:116] container: {ID:4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727 Status:running}
	I0817 03:18:06.262986 1760466 cri.go:116] container: {ID:6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae Status:running}
	I0817 03:18:06.262996 1760466 cri.go:116] container: {ID:ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d Status:running}
	I0817 03:18:06.263001 1760466 cri.go:116] container: {ID:ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935 Status:running}
	I0817 03:18:06.263011 1760466 cri.go:116] container: {ID:ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038 Status:running}
	I0817 03:18:06.263016 1760466 cri.go:118] skipping ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038 - not in ps
	I0817 03:18:06.263022 1760466 cri.go:116] container: {ID:c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05 Status:running}
	I0817 03:18:06.263029 1760466 cri.go:118] skipping c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05 - not in ps
	I0817 03:18:06.263033 1760466 cri.go:116] container: {ID:e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44 Status:running}
	I0817 03:18:06.263039 1760466 cri.go:118] skipping e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44 - not in ps
	I0817 03:18:06.263043 1760466 cri.go:116] container: {ID:e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075 Status:running}
	I0817 03:18:06.263048 1760466 cri.go:118] skipping e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075 - not in ps
	I0817 03:18:06.263055 1760466 cri.go:116] container: {ID:fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217 Status:running}
	I0817 03:18:06.263063 1760466 cri.go:118] skipping fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217 - not in ps
	I0817 03:18:06.263101 1760466 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9
	I0817 03:18:06.276452 1760466 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9 4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727
	I0817 03:18:06.287641 1760466 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9 4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:18:06Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0817 03:18:06.828873 1760466 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:18:06.840452 1760466 pause.go:50] kubelet running: false
	I0817 03:18:06.840527 1760466 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 03:18:06.985390 1760466 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0817 03:18:06.985497 1760466 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0817 03:18:07.080464 1760466 cri.go:76] found id: "3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180"
	I0817 03:18:07.080489 1760466 cri.go:76] found id: "ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d"
	I0817 03:18:07.080494 1760466 cri.go:76] found id: "482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9"
	I0817 03:18:07.080500 1760466 cri.go:76] found id: "6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae"
	I0817 03:18:07.080504 1760466 cri.go:76] found id: "ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935"
	I0817 03:18:07.080509 1760466 cri.go:76] found id: "4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727"
	I0817 03:18:07.080514 1760466 cri.go:76] found id: "cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2"
	I0817 03:18:07.080519 1760466 cri.go:76] found id: "c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a"
	I0817 03:18:07.080525 1760466 cri.go:76] found id: "ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da"
	I0817 03:18:07.080533 1760466 cri.go:76] found id: "0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587"
	I0817 03:18:07.080541 1760466 cri.go:76] found id: "21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158"
	I0817 03:18:07.080546 1760466 cri.go:76] found id: "a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58"
	I0817 03:18:07.080552 1760466 cri.go:76] found id: ""
	I0817 03:18:07.080591 1760466 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:18:07.113935 1760466 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b","pid":863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b/rootfs","created":"2021-08-17T03:17:53.584918616Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210817031538-1554185_89facb261023507d1607ed4be0355294"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180","pid":1243,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3480b6e9d1801
dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180/rootfs","created":"2021-08-17T03:18:02.893838308Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9","pid":1055,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9/rootfs","created":"2021-08-17T03:17:53.927987527Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cr
i.sandbox-id":"c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727","pid":1008,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727/rootfs","created":"2021-08-17T03:17:53.854978976Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae","pid":1024,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae","roo
tfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae/rootfs","created":"2021-08-17T03:17:53.839447465Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d","pid":1257,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d/rootfs","created":"2021-08-17T03:18:02.895743823Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02
a9f05f8a4075"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935","pid":1001,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935/rootfs","created":"2021-08-17T03:17:53.859690313Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038","pid":906,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/
ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038/rootfs","created":"2021-08-17T03:17:53.629773276Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210817031538-1554185_b96a20828c8e2f9eea117dba0ac4a7a2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05","pid":913,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05/rootfs","created":"2021-08-17T03:17:53.637610079Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281
f2c4effb2e770731c05","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210817031538-1554185_4edc3ccdb375133081352b5372610d48"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44","pid":881,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44/rootfs","created":"2021-08-17T03:17:53.609153226Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210817031538-1554185_3b018754fa8d4f77e016923e2f4bd265"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad
02a9f05f8a4075","pid":1177,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075/rootfs","created":"2021-08-17T03:18:02.674386109Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-clj8s_929df3e0-ea05-4a55-b16d-ac959dbf86a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217","pid":1176,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a
43c130cad7217/rootfs","created":"2021-08-17T03:18:02.691534186Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-w8m9q_caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf"},"owner":"root"}]
	I0817 03:18:07.114151 1760466 cri.go:113] list returned 12 containers
	I0817 03:18:07.114168 1760466 cri.go:116] container: {ID:085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b Status:running}
	I0817 03:18:07.114180 1760466 cri.go:118] skipping 085c3ba8f1d3f8f4422058f34d0b1da20f8fbeb137762c2020cda1ce726ee21b - not in ps
	I0817 03:18:07.114191 1760466 cri.go:116] container: {ID:3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180 Status:paused}
	I0817 03:18:07.114197 1760466 cri.go:122] skipping {3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180 paused}: state = "paused", want "running"
	I0817 03:18:07.114214 1760466 cri.go:116] container: {ID:482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9 Status:paused}
	I0817 03:18:07.114220 1760466 cri.go:122] skipping {482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9 paused}: state = "paused", want "running"
	I0817 03:18:07.114225 1760466 cri.go:116] container: {ID:4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727 Status:running}
	I0817 03:18:07.114235 1760466 cri.go:116] container: {ID:6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae Status:running}
	I0817 03:18:07.114241 1760466 cri.go:116] container: {ID:ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d Status:running}
	I0817 03:18:07.114250 1760466 cri.go:116] container: {ID:ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935 Status:running}
	I0817 03:18:07.114255 1760466 cri.go:116] container: {ID:ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038 Status:running}
	I0817 03:18:07.114261 1760466 cri.go:118] skipping ac78a36df6a81748e2b4860277773b0dd6723cf22540e6dff0bca14c0afd6038 - not in ps
	I0817 03:18:07.114269 1760466 cri.go:116] container: {ID:c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05 Status:running}
	I0817 03:18:07.114274 1760466 cri.go:118] skipping c0a8d7a4730f4281d5d41b028025ad7990d1ae41dd281f2c4effb2e770731c05 - not in ps
	I0817 03:18:07.114280 1760466 cri.go:116] container: {ID:e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44 Status:running}
	I0817 03:18:07.114289 1760466 cri.go:118] skipping e285d26d47a9f03f487c457d8fec164de84651e483238f41f03e30c8c2372a44 - not in ps
	I0817 03:18:07.114296 1760466 cri.go:116] container: {ID:e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075 Status:running}
	I0817 03:18:07.114308 1760466 cri.go:118] skipping e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075 - not in ps
	I0817 03:18:07.114312 1760466 cri.go:116] container: {ID:fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217 Status:running}
	I0817 03:18:07.114318 1760466 cri.go:118] skipping fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217 - not in ps
	I0817 03:18:07.114361 1760466 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727
	I0817 03:18:07.127876 1760466 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727 6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae
	I0817 03:18:07.141224 1760466 out.go:177] 
	W0817 03:18:07.141342 1760466 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727 6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:18:07Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727 6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T03:18:07Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0817 03:18:07.141362 1760466 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0817 03:18:07.149019 1760466 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_3.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_3.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0817 03:18:07.150590 1760466 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-arm64 pause -p newest-cni-20210817031538-1554185 --alsologtostderr -v=1 failed: exit status 80
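The stderr above shows the proximate cause of the exit status 80: minikube handed two container IDs to a single `runc pause` invocation, while `runc pause` accepts exactly one argument. A manual workaround sketch (the IDs are the ones from the error message; pausing them one at a time is an illustration of the constraint, not minikube's actual fix):

	# `runc pause` takes exactly one container ID, so pause each container in its own invocation.
	for id in 4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727 \
	          6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae; do
	  sudo runc --root /run/containerd/runc/k8s.io pause "$id"
	done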
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210817031538-1554185
helpers_test.go:236: (dbg) docker inspect newest-cni-20210817031538-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45",
	        "Created": "2021-08-17T03:15:40.389083806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1757577,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T03:17:25.598212216Z",
	            "FinishedAt": "2021-08-17T03:17:24.280896174Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45/hostname",
	        "HostsPath": "/var/lib/docker/containers/00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45/hosts",
	        "LogPath": "/var/lib/docker/containers/00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45/00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45-json.log",
	        "Name": "/newest-cni-20210817031538-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-20210817031538-1554185:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210817031538-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d5fc6a736ab98ef92456d43ae6dc75e59b09f76be0acc4655fb84146a0109d9-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d5fc6a736ab98ef92456d43ae6dc75e59b09f76be0acc4655fb84146a0109d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d5fc6a736ab98ef92456d43ae6dc75e59b09f76be0acc4655fb84146a0109d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d5fc6a736ab98ef92456d43ae6dc75e59b09f76be0acc4655fb84146a0109d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210817031538-1554185",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210817031538-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210817031538-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210817031538-1554185",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210817031538-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd4213be6abb1c5cd4a9ea453ed9a6707bf98bacc73bf0a3d06b9fb21dfcaa6e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50502"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50501"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50500"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cd4213be6abb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210817031538-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "00337eb3ef8a",
	                        "newest-cni-20210817031538-1554185"
	                    ],
	                    "NetworkID": "d76af0dbb8c0f4682496f6ba0caf4de1b85120cf92e723f44f1621b8c2b2362f",
	                    "EndpointID": "e6934544be6a7ed5427c5032011f456a7c842dac3ee5c5171b8500f2dc5abaa8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
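The full docker inspect dump above can be narrowed to the fields relevant to this failure with a Go template, the same mechanism the log itself uses later (e.g. --format={{.State.Status}}); a minimal spot-check sketch against the container from this run:

	# Hypothetical spot-check: print only the runtime state of the profile container.
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} started={{.State.StartedAt}}' newest-cni-20210817031538-1554185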
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210817031538-1554185 -n newest-cni-20210817031538-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210817031538-1554185 -n newest-cni-20210817031538-1554185: exit status 2 (296.567654ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
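The post-mortem only samples the Host field here; when reproducing locally, the full per-component status is usually more informative. A sketch using the profile name from this run (the -o json output flag is assumed to be available in this minikube build):

	# Dump status for every component of the profile, human-readable and as JSON.
	out/minikube-linux-arm64 status -p newest-cni-20210817031538-1554185
	out/minikube-linux-arm64 status -p newest-cni-20210817031538-1554185 -o json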
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-20210817031538-1554185 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 -p newest-cni-20210817031538-1554185 logs -n 25: exit status 110 (10.981546909s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                   Profile                    |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|----------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:22 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                              |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                              |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:07:27 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                              |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                              |         |         |                               |                               |
	|         | --driver=docker                                            |                                              |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                              |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:38 UTC | Tue, 17 Aug 2021 03:07:38 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                              |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:40 UTC | Tue, 17 Aug 2021 03:07:41 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:42 UTC | Tue, 17 Aug 2021 03:07:43 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:44 UTC | Tue, 17 Aug 2021 03:07:47 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:47 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20210817030748-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | disable-driver-mounts-20210817030748-1554185               |                                              |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:09:14 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                              |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                              |         |         |                               |                               |
	|         | --driver=docker                                            |                                              |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:24 UTC | Tue, 17 Aug 2021 03:09:24 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                              |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                              |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:25 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                              |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                              |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:15:18 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                              |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                              |         |         |                               |                               |
	|         | --driver=docker                                            |                                              |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:28 UTC | Tue, 17 Aug 2021 03:15:29 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                              |         |         |                               |                               |
	| -p      | no-preload-20210817030748-1554185                          | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:31 UTC | Tue, 17 Aug 2021 03:15:32 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| -p      | no-preload-20210817030748-1554185                          | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:33 UTC | Tue, 17 Aug 2021 03:15:34 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:35 UTC | Tue, 17 Aug 2021 03:15:38 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:38 UTC | Tue, 17 Aug 2021 03:15:38 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	| start   | -p newest-cni-20210817031538-1554185 --memory=2200         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:39 UTC | Tue, 17 Aug 2021 03:17:03 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                              |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                              |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                              |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                              |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:03 UTC | Tue, 17 Aug 2021 03:17:04 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                              |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                              |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:04 UTC | Tue, 17 Aug 2021 03:17:24 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                              |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:24 UTC | Tue, 17 Aug 2021 03:17:24 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                              |         |         |                               |                               |
	| start   | -p newest-cni-20210817031538-1554185 --memory=2200         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:24 UTC | Tue, 17 Aug 2021 03:18:04 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                              |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                              |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                              |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                              |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:18:04 UTC | Tue, 17 Aug 2021 03:18:05 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                              |         |         |                               |                               |
	|---------|------------------------------------------------------------|----------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 03:17:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 03:17:24.818699 1757367 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:17:24.818847 1757367 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:17:24.818875 1757367 out.go:311] Setting ErrFile to fd 2...
	I0817 03:17:24.818891 1757367 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:17:24.819041 1757367 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:17:24.819309 1757367 out.go:305] Setting JSON to false
	I0817 03:17:24.820560 1757367 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39583,"bootTime":1629130662,"procs":431,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 03:17:24.820655 1757367 start.go:121] virtualization:  
	I0817 03:17:24.823573 1757367 out.go:177] * [newest-cni-20210817031538-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 03:17:24.825366 1757367 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 03:17:24.823731 1757367 notify.go:169] Checking for updates...
	I0817 03:17:24.827450 1757367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:17:24.829313 1757367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 03:17:24.831460 1757367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 03:17:24.831908 1757367 config.go:177] Loaded profile config "newest-cni-20210817031538-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:17:24.832380 1757367 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 03:17:24.902448 1757367 docker.go:132] docker version: linux-20.10.8
	I0817 03:17:24.902550 1757367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:17:25.046236 1757367 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:17:24.961542715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:17:25.046338 1757367 docker.go:244] overlay module found
	I0817 03:17:25.048539 1757367 out.go:177] * Using the docker driver based on existing profile
	I0817 03:17:25.048564 1757367 start.go:278] selected driver: docker
	I0817 03:17:25.048570 1757367 start.go:751] validating driver "docker" against &{Name:newest-cni-20210817031538-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817031538-1554185 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain]
VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:17:25.048677 1757367 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 03:17:25.048717 1757367 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:17:25.048732 1757367 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:17:25.050546 1757367 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:17:25.050875 1757367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:17:25.129226 1757367 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:17:25.078286511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 03:17:25.129350 1757367 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:17:25.129370 1757367 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:17:25.132139 1757367 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:17:25.132231 1757367 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0817 03:17:25.132254 1757367 cni.go:93] Creating CNI manager for ""
	I0817 03:17:25.132261 1757367 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:17:25.132276 1757367 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 03:17:25.132283 1757367 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 03:17:25.132292 1757367 start_flags.go:277] config:
	{Name:newest-cni-20210817031538-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817031538-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:fal
se kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:17:25.134333 1757367 out.go:177] * Starting control plane node newest-cni-20210817031538-1554185 in cluster newest-cni-20210817031538-1554185
	I0817 03:17:25.134366 1757367 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 03:17:25.136042 1757367 out.go:177] * Pulling base image ...
	I0817 03:17:25.136063 1757367 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 03:17:25.136092 1757367 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4
	I0817 03:17:25.136108 1757367 cache.go:56] Caching tarball of preloaded images
	I0817 03:17:25.136229 1757367 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 03:17:25.136250 1757367 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0817 03:17:25.136361 1757367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/config.json ...
	I0817 03:17:25.136519 1757367 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 03:17:25.179296 1757367 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 03:17:25.179315 1757367 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 03:17:25.179327 1757367 cache.go:205] Successfully downloaded all kic artifacts
	I0817 03:17:25.179364 1757367 start.go:313] acquiring machines lock for newest-cni-20210817031538-1554185: {Name:mkfbad738c3621399011c572f2cc8ad1253002d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:17:25.179455 1757367 start.go:317] acquired machines lock for "newest-cni-20210817031538-1554185" in 63.056µs
	I0817 03:17:25.179482 1757367 start.go:93] Skipping create...Using existing machine configuration
	I0817 03:17:25.179492 1757367 fix.go:55] fixHost starting: 
	I0817 03:17:25.179764 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:17:25.209174 1757367 fix.go:108] recreateIfNeeded on newest-cni-20210817031538-1554185: state=Stopped err=<nil>
	W0817 03:17:25.209200 1757367 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 03:17:25.211374 1757367 out.go:177] * Restarting existing docker container for "newest-cni-20210817031538-1554185" ...
	I0817 03:17:25.211431 1757367 cli_runner.go:115] Run: docker start newest-cni-20210817031538-1554185
	I0817 03:17:25.607494 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:17:25.642942 1757367 kic.go:420] container "newest-cni-20210817031538-1554185" state is running.
	I0817 03:17:25.643309 1757367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817031538-1554185
	I0817 03:17:25.686073 1757367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/config.json ...
	I0817 03:17:25.686234 1757367 machine.go:88] provisioning docker machine ...
	I0817 03:17:25.686254 1757367 ubuntu.go:169] provisioning hostname "newest-cni-20210817031538-1554185"
	I0817 03:17:25.686299 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:25.724704 1757367 main.go:130] libmachine: Using SSH client type: native
	I0817 03:17:25.724873 1757367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50503 <nil> <nil>}
	I0817 03:17:25.724887 1757367 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210817031538-1554185 && echo "newest-cni-20210817031538-1554185" | sudo tee /etc/hostname
	I0817 03:17:25.725451 1757367 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0817 03:17:28.850133 1757367 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210817031538-1554185
	
	I0817 03:17:28.850199 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:28.890023 1757367 main.go:130] libmachine: Using SSH client type: native
	I0817 03:17:28.890192 1757367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50503 <nil> <nil>}
	I0817 03:17:28.890221 1757367 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210817031538-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210817031538-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210817031538-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 03:17:29.010228 1757367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 03:17:29.010254 1757367 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 03:17:29.010289 1757367 ubuntu.go:177] setting up certificates
	I0817 03:17:29.010298 1757367 provision.go:83] configureAuth start
	I0817 03:17:29.010351 1757367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817031538-1554185
	I0817 03:17:29.043727 1757367 provision.go:138] copyHostCerts
	I0817 03:17:29.043785 1757367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 03:17:29.043798 1757367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 03:17:29.043853 1757367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 03:17:29.043927 1757367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 03:17:29.043939 1757367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 03:17:29.043964 1757367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 03:17:29.044011 1757367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 03:17:29.044021 1757367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 03:17:29.044041 1757367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 03:17:29.044112 1757367 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210817031538-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210817031538-1554185]
	I0817 03:17:29.308896 1757367 provision.go:172] copyRemoteCerts
	I0817 03:17:29.308953 1757367 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 03:17:29.308997 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:29.338849 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:17:29.420648 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 03:17:29.435629 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0817 03:17:29.450565 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 03:17:29.464932 1757367 provision.go:86] duration metric: configureAuth took 454.623957ms
	I0817 03:17:29.464952 1757367 ubuntu.go:193] setting minikube options for container-runtime
	I0817 03:17:29.465116 1757367 config.go:177] Loaded profile config "newest-cni-20210817031538-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:17:29.465125 1757367 machine.go:91] provisioned docker machine in 3.778885025s
	I0817 03:17:29.465132 1757367 start.go:267] post-start starting for "newest-cni-20210817031538-1554185" (driver="docker")
	I0817 03:17:29.465139 1757367 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 03:17:29.465182 1757367 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 03:17:29.465216 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:29.495469 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:17:29.576507 1757367 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 03:17:29.578909 1757367 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 03:17:29.578939 1757367 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 03:17:29.578951 1757367 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 03:17:29.578960 1757367 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 03:17:29.578970 1757367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 03:17:29.579014 1757367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 03:17:29.579103 1757367 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 03:17:29.579198 1757367 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 03:17:29.584741 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:17:29.599075 1757367 start.go:270] post-start completed in 133.932037ms
	I0817 03:17:29.599117 1757367 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 03:17:29.599153 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:29.629395 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:17:29.711161 1757367 fix.go:57] fixHost completed within 4.531666977s
	I0817 03:17:29.711177 1757367 start.go:80] releasing machines lock for "newest-cni-20210817031538-1554185", held for 4.531709684s
	I0817 03:17:29.711241 1757367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817031538-1554185
	I0817 03:17:29.761806 1757367 ssh_runner.go:149] Run: systemctl --version
	I0817 03:17:29.761851 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:29.761879 1757367 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 03:17:29.761932 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:29.833366 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:17:29.849203 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:17:30.100448 1757367 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 03:17:30.112101 1757367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 03:17:30.120206 1757367 docker.go:153] disabling docker service ...
	I0817 03:17:30.120242 1757367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 03:17:30.128879 1757367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 03:17:30.136660 1757367 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 03:17:30.205674 1757367 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 03:17:30.282520 1757367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 03:17:30.290427 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 03:17:30.300976 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
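	The containerd configuration above is shipped to the node as a base64 payload and decoded in place with base64 -d. For reference, a minimal, self-contained Go sketch (generic code, not minikube's own helper) that decodes such a payload locally so the resulting /etc/containerd/config.toml can be read directly:
	package main
	
	import (
		"encoding/base64"
		"fmt"
		"log"
		"os"
	)
	
	// Decodes a base64-encoded containerd config (like the payload in the log
	// line above) and prints the TOML it contains. The payload is passed as
	// the first command-line argument.
	func main() {
		if len(os.Args) < 2 {
			log.Fatal("usage: decode <base64-config>")
		}
		raw, err := base64.StdEncoding.DecodeString(os.Args[1])
		if err != nil {
			log.Fatalf("decode: %v", err)
		}
		fmt.Print(string(raw)) // contents destined for /etc/containerd/config.toml
	}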
	I0817 03:17:30.311923 1757367 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 03:17:30.317176 1757367 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 03:17:30.322396 1757367 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 03:17:30.394562 1757367 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 03:17:30.523003 1757367 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 03:17:30.523101 1757367 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 03:17:30.526569 1757367 start.go:413] Will wait 60s for crictl version
	I0817 03:17:30.526647 1757367 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:17:30.555522 1757367 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T03:17:30Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0817 03:17:41.602309 1757367 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:17:41.626249 1757367 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
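	The "will retry after 11.04660288s" line above is a simple retry-with-delay loop around sudo crictl version while containerd finishes initializing. A simplified sketch of that pattern (illustrative only; the helper name and signature are not minikube's retry.go API):
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// retryAfter runs fn until it succeeds or the attempts are exhausted,
	// sleeping between tries -- a stripped-down version of the behaviour
	// behind the retry line above.
	func retryAfter(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay)
		}
		return fmt.Errorf("after %d attempts: %w", attempts, err)
	}
	
	func main() {
		calls := 0
		err := retryAfter(3, 100*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("server is not initialized yet")
			}
			return nil
		})
		fmt.Println("calls:", calls, "err:", err)
	}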
	I0817 03:17:41.626295 1757367 ssh_runner.go:149] Run: containerd --version
	I0817 03:17:41.647551 1757367 ssh_runner.go:149] Run: containerd --version
	I0817 03:17:41.669710 1757367 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0817 03:17:41.669775 1757367 cli_runner.go:115] Run: docker network inspect newest-cni-20210817031538-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:17:41.698711 1757367 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 03:17:41.701518 1757367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:17:41.711319 1757367 out.go:177]   - kubelet.network-plugin=cni
	I0817 03:17:41.713014 1757367 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0817 03:17:41.713068 1757367 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 03:17:41.713126 1757367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:17:41.738639 1757367 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:17:41.738653 1757367 containerd.go:517] Images already preloaded, skipping extraction
	I0817 03:17:41.738687 1757367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:17:41.764883 1757367 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:17:41.764898 1757367 cache_images.go:74] Images are preloaded, skipping loading
	I0817 03:17:41.764942 1757367 ssh_runner.go:149] Run: sudo crictl info
	I0817 03:17:41.797816 1757367 cni.go:93] Creating CNI manager for ""
	I0817 03:17:41.797838 1757367 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:17:41.797863 1757367 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0817 03:17:41.797884 1757367 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210817031538-1554185 NodeName:newest-cni-20210817031538-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true l
eader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 03:17:41.798049 1757367 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20210817031538-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 03:17:41.798148 1757367 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210817031538-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817031538-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 03:17:41.798209 1757367 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0817 03:17:41.804886 1757367 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 03:17:41.804933 1757367 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 03:17:41.812508 1757367 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (620 bytes)
	I0817 03:17:41.824471 1757367 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 03:17:41.839852 1757367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0817 03:17:41.851112 1757367 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 03:17:41.853946 1757367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:17:41.863529 1757367 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185 for IP: 192.168.49.2
	I0817 03:17:41.863572 1757367 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 03:17:41.863591 1757367 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 03:17:41.863640 1757367 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/client.key
	I0817 03:17:41.863658 1757367 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/apiserver.key.dd3b5fb2
	I0817 03:17:41.863680 1757367 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/proxy-client.key
	I0817 03:17:41.863776 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 03:17:41.863814 1757367 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 03:17:41.863828 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 03:17:41.863852 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 03:17:41.863883 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 03:17:41.863908 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 03:17:41.863953 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:17:41.865051 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 03:17:41.889285 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 03:17:41.910628 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 03:17:41.929484 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 03:17:41.954093 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 03:17:41.978403 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 03:17:41.998169 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 03:17:42.013452 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 03:17:42.028753 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 03:17:42.043330 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 03:17:42.059012 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 03:17:42.073148 1757367 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 03:17:42.083617 1757367 ssh_runner.go:149] Run: openssl version
	I0817 03:17:42.088941 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 03:17:42.095676 1757367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 03:17:42.098315 1757367 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 03:17:42.098371 1757367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 03:17:42.102403 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 03:17:42.108391 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 03:17:42.114349 1757367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 03:17:42.116938 1757367 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 03:17:42.116976 1757367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 03:17:42.121053 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 03:17:42.126699 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 03:17:42.132744 1757367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:17:42.135272 1757367 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:17:42.135310 1757367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:17:42.139756 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 03:17:42.145496 1757367 kubeadm.go:390] StartCluster: {Name:newest-cni-20210817031538-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817031538-1554185 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map
[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:17:42.145589 1757367 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 03:17:42.145644 1757367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:17:42.171369 1757367 cri.go:76] found id: "cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2"
	I0817 03:17:42.171392 1757367 cri.go:76] found id: "c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a"
	I0817 03:17:42.171398 1757367 cri.go:76] found id: "ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da"
	I0817 03:17:42.171419 1757367 cri.go:76] found id: "0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587"
	I0817 03:17:42.171423 1757367 cri.go:76] found id: "21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158"
	I0817 03:17:42.171427 1757367 cri.go:76] found id: "a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58"
	I0817 03:17:42.171437 1757367 cri.go:76] found id: ""
	I0817 03:17:42.171468 1757367 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:17:42.185254 1757367 cri.go:103] JSON = null
	W0817 03:17:42.185303 1757367 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0817 03:17:42.185349 1757367 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 03:17:42.192145 1757367 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 03:17:42.192162 1757367 kubeadm.go:600] restartCluster start
	I0817 03:17:42.192197 1757367 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 03:17:42.197552 1757367 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:42.198421 1757367 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210817031538-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:17:42.198651 1757367 kubeconfig.go:128] "newest-cni-20210817031538-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 03:17:42.199144 1757367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:17:42.201287 1757367 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 03:17:42.207462 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:42.207519 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:42.216278 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:42.416610 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:42.416653 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:42.425139 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:42.616343 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:42.616415 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:42.624937 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:42.817032 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:42.817131 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:42.827111 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:43.017392 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:43.017468 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:43.027083 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:43.217326 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:43.217370 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:43.226041 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:43.417248 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:43.417322 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:43.425848 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:43.617094 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:43.617172 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:43.625990 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:43.817251 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:43.817312 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:43.827497 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:44.016845 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:44.016914 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:44.026276 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:44.216401 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:44.216476 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:44.225079 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:44.417330 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:44.417372 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:44.425977 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:44.617170 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:44.617244 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:44.625726 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:44.817265 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:44.817349 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:44.827569 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.016877 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:45.016941 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:45.026740 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.216999 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:45.217043 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:45.225669 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.225680 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:45.225713 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:45.234106 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.234151 1757367 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0817 03:17:45.234169 1757367 kubeadm.go:1032] stopping kube-system containers ...
	I0817 03:17:45.234178 1757367 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:17:45.234225 1757367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:17:45.257425 1757367 cri.go:76] found id: "cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2"
	I0817 03:17:45.257442 1757367 cri.go:76] found id: "c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a"
	I0817 03:17:45.257447 1757367 cri.go:76] found id: "ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da"
	I0817 03:17:45.257451 1757367 cri.go:76] found id: "0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587"
	I0817 03:17:45.257475 1757367 cri.go:76] found id: "21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158"
	I0817 03:17:45.257486 1757367 cri.go:76] found id: "a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58"
	I0817 03:17:45.257490 1757367 cri.go:76] found id: ""
	I0817 03:17:45.257495 1757367 cri.go:221] Stopping containers: [cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2 c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da 0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587 21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158 a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58]
	I0817 03:17:45.257531 1757367 ssh_runner.go:149] Run: which crictl
	I0817 03:17:45.259939 1757367 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2 c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da 0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587 21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158 a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58
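	The two steps above first list every kube-system container ID via crictl and then stop them by ID before the cluster restart. A rough sketch of the same list-then-stop flow using plain os/exec (assumes crictl is on PATH and sudo is available; this is not minikube's cri package):
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// stopKubeSystemContainers lists kube-system container IDs with crictl and
	// stops them, mirroring the "stopping kube-system containers" step above.
	func stopKubeSystemContainers() error {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return fmt.Errorf("list containers: %w", err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return nil
		}
		args := append([]string{"crictl", "stop"}, ids...)
		if err := exec.Command("sudo", args...).Run(); err != nil {
			return fmt.Errorf("stop containers: %w", err)
		}
		return nil
	}
	
	func main() {
		if err := stopKubeSystemContainers(); err != nil {
			fmt.Println("error:", err)
		}
	}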
	I0817 03:17:45.282084 1757367 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 03:17:45.290821 1757367 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:17:45.296642 1757367 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 03:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 17 03:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 17 03:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 17 03:16 /etc/kubernetes/scheduler.conf
	
	I0817 03:17:45.296684 1757367 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 03:17:45.302445 1757367 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 03:17:45.307967 1757367 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 03:17:45.313342 1757367 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.313406 1757367 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 03:17:45.318637 1757367 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 03:17:45.324252 1757367 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.324303 1757367 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 03:17:45.329495 1757367 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:17:45.335010 1757367 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 03:17:45.335029 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:17:45.395171 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:17:47.603212 1757367 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.207984373s)
	I0817 03:17:47.603242 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:17:47.752604 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:17:47.879104 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:17:48.003375 1757367 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:17:48.003426 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:48.513918 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:49.014079 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:49.513492 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:50.014314 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:50.514436 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:51.014230 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:51.513482 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:52.013528 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:52.513701 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:53.014116 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:53.513466 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:54.013795 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:54.513565 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:54.528870 1757367 api_server.go:70] duration metric: took 6.525497605s to wait for apiserver process to appear ...
	I0817 03:17:54.528886 1757367 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:17:54.528894 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:17:59.532693 1757367 api_server.go:255] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 03:18:00.033389 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:01.005646 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:18:01.005664 1757367 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:18:01.033720 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:01.102602 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:18:01.102619 1757367 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:18:01.532841 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:01.540973 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:18:01.541018 1757367 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:18:02.033196 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:02.041346 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:18:02.041372 1757367 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:18:02.532836 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:02.541511 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:18:02.555121 1757367 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 03:18:02.555142 1757367 api_server.go:129] duration metric: took 8.02625102s to wait for apiserver health ...
	I0817 03:18:02.555151 1757367 cni.go:93] Creating CNI manager for ""
	I0817 03:18:02.555158 1757367 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:18:02.557265 1757367 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:18:02.557321 1757367 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:18:02.561371 1757367 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0817 03:18:02.561384 1757367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:18:02.574495 1757367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 03:18:02.809600 1757367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:18:02.835213 1757367 system_pods.go:59] 9 kube-system pods found
	I0817 03:18:02.835247 1757367 system_pods.go:61] "coredns-78fcd69978-x8zkx" [95714572-ca09-4e17-981a-934153d9c863] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:02.835262 1757367 system_pods.go:61] "etcd-newest-cni-20210817031538-1554185" [97b912ad-8a27-4b97-b545-b16a9c788a47] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 03:18:02.835282 1757367 system_pods.go:61] "kindnet-w8m9q" [caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 03:18:02.835295 1757367 system_pods.go:61] "kube-apiserver-newest-cni-20210817031538-1554185" [92524635-9c14-45bd-8b34-fba35775cc9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 03:18:02.835352 1757367 system_pods.go:61] "kube-controller-manager-newest-cni-20210817031538-1554185" [79f2907d-b3cd-49a7-9d38-194068fe6579] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 03:18:02.835365 1757367 system_pods.go:61] "kube-proxy-clj8s" [929df3e0-ea05-4a55-b16d-ac959dbf86a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 03:18:02.835378 1757367 system_pods.go:61] "kube-scheduler-newest-cni-20210817031538-1554185" [5c5510bd-a46c-4369-b82d-8774f7d679d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 03:18:02.835390 1757367 system_pods.go:61] "metrics-server-7c784ccb57-kfrc2" [71464a64-042a-490f-ac6a-1e85150897c3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:02.835401 1757367 system_pods.go:61] "storage-provisioner" [d9c8ddfd-73dd-4bc3-9638-50cf0b37a760] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:02.835411 1757367 system_pods.go:74] duration metric: took 25.789786ms to wait for pod list to return data ...
	I0817 03:18:02.835421 1757367 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:18:02.841456 1757367 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:18:02.841482 1757367 node_conditions.go:123] node cpu capacity is 2
	I0817 03:18:02.841494 1757367 node_conditions.go:105] duration metric: took 6.067073ms to run NodePressure ...
	I0817 03:18:02.841507 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:18:03.123328 1757367 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 03:18:03.132589 1757367 ops.go:34] apiserver oom_adj: -16
	I0817 03:18:03.132628 1757367 kubeadm.go:604] restartCluster took 20.940459247s
	I0817 03:18:03.132646 1757367 kubeadm.go:392] StartCluster complete in 20.987153807s
	I0817 03:18:03.132671 1757367 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:18:03.132755 1757367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:18:03.133693 1757367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:18:03.137919 1757367 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210817031538-1554185" rescaled to 1
	I0817 03:18:03.137970 1757367 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0817 03:18:03.139927 1757367 out.go:177] * Verifying Kubernetes components...
	I0817 03:18:03.139993 1757367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:18:03.138221 1757367 config.go:177] Loaded profile config "newest-cni-20210817031538-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:18:03.138237 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 03:18:03.138247 1757367 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0817 03:18:03.140140 1757367 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210817031538-1554185"
	I0817 03:18:03.140153 1757367 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210817031538-1554185"
	W0817 03:18:03.140159 1757367 addons.go:147] addon storage-provisioner should already be in state true
	I0817 03:18:03.140159 1757367 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210817031538-1554185"
	I0817 03:18:03.140174 1757367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210817031538-1554185"
	I0817 03:18:03.140180 1757367 host.go:66] Checking if "newest-cni-20210817031538-1554185" exists ...
	I0817 03:18:03.140450 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:03.140649 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:03.140710 1757367 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210817031538-1554185"
	I0817 03:18:03.140719 1757367 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210817031538-1554185"
	W0817 03:18:03.140724 1757367 addons.go:147] addon metrics-server should already be in state true
	I0817 03:18:03.140740 1757367 host.go:66] Checking if "newest-cni-20210817031538-1554185" exists ...
	I0817 03:18:03.141137 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:03.141300 1757367 addons.go:59] Setting dashboard=true in profile "newest-cni-20210817031538-1554185"
	I0817 03:18:03.141311 1757367 addons.go:135] Setting addon dashboard=true in "newest-cni-20210817031538-1554185"
	W0817 03:18:03.141316 1757367 addons.go:147] addon dashboard should already be in state true
	I0817 03:18:03.141332 1757367 host.go:66] Checking if "newest-cni-20210817031538-1554185" exists ...
	I0817 03:18:03.141725 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:03.224681 1757367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 03:18:03.224844 1757367 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:18:03.224856 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 03:18:03.224903 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:18:03.291242 1757367 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0817 03:18:03.293968 1757367 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0817 03:18:03.294014 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 03:18:03.294022 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 03:18:03.294067 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:18:03.341091 1757367 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0817 03:18:03.341143 1757367 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 03:18:03.341152 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 03:18:03.341198 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:18:03.337861 1757367 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210817031538-1554185"
	W0817 03:18:03.341369 1757367 addons.go:147] addon default-storageclass should already be in state true
	I0817 03:18:03.341394 1757367 host.go:66] Checking if "newest-cni-20210817031538-1554185" exists ...
	I0817 03:18:03.341836 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:03.396354 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:18:03.418272 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:18:03.448512 1757367 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:18:03.448568 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:18:03.448704 1757367 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 03:18:03.467104 1757367 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 03:18:03.467118 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 03:18:03.467166 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:18:03.499885 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:18:03.530405 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:18:03.534743 1757367 api_server.go:70] duration metric: took 396.74695ms to wait for apiserver process to appear ...
	I0817 03:18:03.534759 1757367 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:18:03.534767 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:03.543922 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:18:03.544773 1757367 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 03:18:03.544824 1757367 api_server.go:129] duration metric: took 10.059861ms to wait for apiserver health ...
	I0817 03:18:03.544845 1757367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:18:03.550790 1757367 system_pods.go:59] 9 kube-system pods found
	I0817 03:18:03.550898 1757367 system_pods.go:61] "coredns-78fcd69978-x8zkx" [95714572-ca09-4e17-981a-934153d9c863] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:03.550922 1757367 system_pods.go:61] "etcd-newest-cni-20210817031538-1554185" [97b912ad-8a27-4b97-b545-b16a9c788a47] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 03:18:03.550941 1757367 system_pods.go:61] "kindnet-w8m9q" [caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 03:18:03.550981 1757367 system_pods.go:61] "kube-apiserver-newest-cni-20210817031538-1554185" [92524635-9c14-45bd-8b34-fba35775cc9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 03:18:03.551025 1757367 system_pods.go:61] "kube-controller-manager-newest-cni-20210817031538-1554185" [79f2907d-b3cd-49a7-9d38-194068fe6579] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 03:18:03.551067 1757367 system_pods.go:61] "kube-proxy-clj8s" [929df3e0-ea05-4a55-b16d-ac959dbf86a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 03:18:03.551089 1757367 system_pods.go:61] "kube-scheduler-newest-cni-20210817031538-1554185" [5c5510bd-a46c-4369-b82d-8774f7d679d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 03:18:03.551118 1757367 system_pods.go:61] "metrics-server-7c784ccb57-kfrc2" [71464a64-042a-490f-ac6a-1e85150897c3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:03.551139 1757367 system_pods.go:61] "storage-provisioner" [d9c8ddfd-73dd-4bc3-9638-50cf0b37a760] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:03.551156 1757367 system_pods.go:74] duration metric: took 6.295345ms to wait for pod list to return data ...
	I0817 03:18:03.551173 1757367 default_sa.go:34] waiting for default service account to be created ...
	I0817 03:18:03.561337 1757367 default_sa.go:45] found service account: "default"
	I0817 03:18:03.561355 1757367 default_sa.go:55] duration metric: took 10.154621ms for default service account to be created ...
	I0817 03:18:03.561363 1757367 kubeadm.go:547] duration metric: took 423.37027ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0817 03:18:03.561380 1757367 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:18:03.565050 1757367 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:18:03.565069 1757367 node_conditions.go:123] node cpu capacity is 2
	I0817 03:18:03.565080 1757367 node_conditions.go:105] duration metric: took 3.692443ms to run NodePressure ...
	I0817 03:18:03.565092 1757367 start.go:231] waiting for startup goroutines ...
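(Editor's note: the system_pods.go wait above amounts to listing the kube-system pods and reading each pod's phase and readiness. A minimal client-go sketch of that step follows; the kubeconfig path is taken from the log for illustration only, and this is not minikube's actual implementation.)

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Path is illustrative; minikube writes its own kubeconfig, as shown in the log above.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Print each pod's phase, mirroring the "waiting for kube-system pods" entries above.
        for _, p := range pods.Items {
            fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
        }
    }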
	I0817 03:18:03.662677 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 03:18:03.662693 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 03:18:03.676750 1757367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:18:03.727394 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 03:18:03.727410 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 03:18:03.734433 1757367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 03:18:03.741690 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 03:18:03.741730 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 03:18:03.777425 1757367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 03:18:03.777470 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0817 03:18:03.786772 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 03:18:03.786852 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0817 03:18:03.801896 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 03:18:03.801939 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 03:18:03.819288 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 03:18:03.819322 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 03:18:03.837158 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 03:18:03.837191 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 03:18:03.850394 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 03:18:03.850427 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 03:18:03.857456 1757367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 03:18:03.857497 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 03:18:03.879650 1757367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:18:03.879696 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 03:18:03.896680 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:18:03.896716 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 03:18:03.924418 1757367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:18:03.964779 1757367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:18:04.300443 1757367 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210817031538-1554185"
	I0817 03:18:04.373100 1757367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0817 03:18:04.373164 1757367 addons.go:344] enableAddons completed in 1.23491856s
	I0817 03:18:04.436515 1757367 start.go:462] kubectl: 1.21.3, cluster: 1.22.0-rc.0 (minor skew: 1)
	I0817 03:18:04.438325 1757367 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210817031538-1554185" cluster and "default" namespace by default
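(Editor's note: the api_server.go lines earlier in this log poll the apiserver's /healthz endpoint roughly every 500ms until the 500 responses, which enumerate the failing poststarthooks, give way to a 200 "ok". A minimal sketch of that polling pattern is shown below; it assumes a self-signed apiserver certificate, hence InsecureSkipVerify, and is illustrative rather than minikube's actual code.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 "ok" or the deadline expires, mirroring the retry loop visible in
    // the api_server.go log lines above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{
                // The apiserver serves a cluster-local certificate, so skip verification here.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
            Timeout: 5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz reported "ok"
                }
                // A 500 response lists each failing poststarthook, as seen in the log above.
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible in the log
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        // 192.168.49.2:8443 is the endpoint shown in the log; adjust for your own cluster.
        if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }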
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	3480b6e9d1801       f37b7c809e5dc       5 seconds ago        Running             kindnet-cni               1                   fe7215298d43c
	ab86eee981b61       5f7fafb97c956       5 seconds ago        Running             kube-proxy                1                   e81492055fbc1
	482d4f309ae03       6fe8178781397       14 seconds ago       Running             kube-apiserver            1                   c0a8d7a4730f4
	6be36ee527254       2252d5eb703b0       14 seconds ago       Running             etcd                      1                   e285d26d47a9f
	ac3b3907e6fe2       41065afd0ca8b       14 seconds ago       Running             kube-controller-manager   1                   ac78a36df6a81
	4d71197216232       82ecd1e357878       14 seconds ago       Running             kube-scheduler            1                   085c3ba8f1d3f
	cc900511df519       f37b7c809e5dc       About a minute ago   Exited              kindnet-cni               0                   363ba39c89949
	c06aefb58ab88       5f7fafb97c956       About a minute ago   Exited              kube-proxy                0                   23a3da90f03dd
	ef9c6f6e4c7fc       2252d5eb703b0       About a minute ago   Exited              etcd                      0                   bf152b7155443
	0095f08395273       41065afd0ca8b       About a minute ago   Exited              kube-controller-manager   0                   ce251a441c8cf
	21668b68e79b6       6fe8178781397       About a minute ago   Exited              kube-apiserver            0                   c96eadb0c989c
	a2bfe6f66f2f9       82ecd1e357878       About a minute ago   Exited              kube-scheduler            0                   84b0f300d329a
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 03:17:25 UTC, end at Tue 2021-08-17 03:18:08 UTC. --
	Aug 17 03:17:54 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:17:54.175830290Z" level=info msg="StartContainer for \"482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9\" returns successfully"
	Aug 17 03:17:54 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:17:54.181469207Z" level=info msg="StartContainer for \"4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727\" returns successfully"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:01.053474115Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.577027954Z" level=info msg="StopPodSandbox for \"23a3da90f03dd5806a31740f8a18e34a8b2ed7a60fe2a5513d178ffb03d3a481\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.577101053Z" level=info msg="Container to stop \"c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.577192326Z" level=info msg="TearDown network for sandbox \"23a3da90f03dd5806a31740f8a18e34a8b2ed7a60fe2a5513d178ffb03d3a481\" successfully"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.577218885Z" level=info msg="StopPodSandbox for \"23a3da90f03dd5806a31740f8a18e34a8b2ed7a60fe2a5513d178ffb03d3a481\" returns successfully"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.577915182Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-clj8s,Uid:929df3e0-ea05-4a55-b16d-ac959dbf86a7,Namespace:kube-system,Attempt:1,}"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.585035192Z" level=info msg="StopPodSandbox for \"363ba39c89949d241f8c07e21761fdeb9ae46ae9b26a1a7cccd5f3c1530f47fe\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.585107035Z" level=info msg="Container to stop \"cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.585183433Z" level=info msg="TearDown network for sandbox \"363ba39c89949d241f8c07e21761fdeb9ae46ae9b26a1a7cccd5f3c1530f47fe\" successfully"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.585194181Z" level=info msg="StopPodSandbox for \"363ba39c89949d241f8c07e21761fdeb9ae46ae9b26a1a7cccd5f3c1530f47fe\" returns successfully"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.585943769Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-w8m9q,Uid:caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf,Namespace:kube-system,Attempt:1,}"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.607911318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075 pid=1129
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.625750251Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217 pid=1151
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.707083116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clj8s,Uid:929df3e0-ea05-4a55-b16d-ac959dbf86a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.709470587Z" level=info msg="CreateContainer within sandbox \"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.728965627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-w8m9q,Uid:caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.732686320Z" level=info msg="CreateContainer within sandbox \"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.766499509Z" level=info msg="CreateContainer within sandbox \"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.766974933Z" level=info msg="StartContainer for \"ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.787367187Z" level=info msg="CreateContainer within sandbox \"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.790196779Z" level=info msg="StartContainer for \"3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.945780700Z" level=info msg="StartContainer for \"ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d\" returns successfully"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.946322192Z" level=info msg="StartContainer for \"3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae] <==
	* 2021-08-17 03:17:54.015168 I | embed: initial advertise peer URLs = https://192.168.49.2:2380
	2021-08-17 03:17:54.015173 I | embed: initial cluster = 
	2021-08-17 03:17:54.149977 I | etcdserver: restarting member aec36adc501070cc in cluster fa54960ea34d58be at commit index 532
	raft2021/08/17 03:17:54 INFO: aec36adc501070cc switched to configuration voters=()
	raft2021/08/17 03:17:54 INFO: aec36adc501070cc became follower at term 2
	raft2021/08/17 03:17:54 INFO: newRaft aec36adc501070cc [peers: [], term: 2, commit: 532, applied: 0, lastindex: 532, lastterm: 2]
	2021-08-17 03:17:54.154239 W | auth: simple token is not cryptographically signed
	2021-08-17 03:17:54.210159 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	raft2021/08/17 03:17:54 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:17:54.213626 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 03:17:54.214199 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 03:17:54.214400 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 03:17:54.221676 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 03:17:54.221805 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-17 03:17:54.221881 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 03:17:55 INFO: aec36adc501070cc is starting a new election at term 2
	raft2021/08/17 03:17:55 INFO: aec36adc501070cc became candidate at term 3
	raft2021/08/17 03:17:55 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3
	raft2021/08/17 03:17:55 INFO: aec36adc501070cc became leader at term 3
	raft2021/08/17 03:17:55 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3
	2021-08-17 03:17:55.170718 I | etcdserver: published {Name:newest-cni-20210817031538-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 03:17:55.170884 I | embed: ready to serve client requests
	2021-08-17 03:17:55.172219 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 03:17:55.193186 I | embed: ready to serve client requests
	2021-08-17 03:17:55.212171 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> etcd [ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da] <==
	* raft2021/08/17 03:16:39 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2021/08/17 03:16:39 INFO: aec36adc501070cc became follower at term 1
	raft2021/08/17 03:16:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:16:39.949180 W | auth: simple token is not cryptographically signed
	2021-08-17 03:16:39.976978 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-17 03:16:39.986520 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/17 03:16:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:16:39.987052 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 03:16:39.997824 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 03:16:39.997949 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-17 03:16:39.998016 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 03:16:40 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 03:16:40 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 03:16:40 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 03:16:40 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 03:16:40 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 03:16:40.556513 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 03:16:40.572655 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 03:16:40.572808 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 03:16:40.572896 I | etcdserver: published {Name:newest-cni-20210817031538-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 03:16:40.573079 I | embed: ready to serve client requests
	2021-08-17 03:16:40.574392 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 03:16:40.574552 I | embed: ready to serve client requests
	2021-08-17 03:16:40.575687 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 03:17:03.888313 I | etcdserver/api/etcdhttp: /health OK (status code 200)
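(Editor's note: the "/health OK (status code 200)" line above comes from etcd's own health handler, and this etcd instance also listens for metrics on http://127.0.0.1:2381 as logged earlier. A small sketch of reproducing that check against the plaintext metrics listener follows; the port is taken from the log and the response shape is typical of etcd 3.4, but neither is guaranteed across versions.)

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // etcd serves /health on its metrics listener without client TLS;
        // 127.0.0.1:2381 matches the "listening for metrics" line above.
        resp, err := http.Get("http://127.0.0.1:2381/health")
        if err != nil {
            fmt.Println("etcd health check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // etcd 3.4 typically answers {"health":"true"} with status 200 when healthy.
        fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
    }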
	
	* 
	* ==> kernel <==
	*  03:18:18 up 11:00,  0 users,  load average: 3.55, 2.52, 1.97
	Linux newest-cni-20210817031538-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158] <==
	* E0817 03:16:45.883273       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0817 03:16:46.007349       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0817 03:16:46.008194       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 03:16:46.019829       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 03:16:46.051349       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0817 03:16:46.051608       1 cache.go:39] Caches are synced for autoregister controller
	I0817 03:16:46.056217       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 03:16:46.070364       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0817 03:16:46.106482       1 controller.go:611] quota admission added evaluator for: namespaces
	I0817 03:16:46.806392       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 03:16:46.806498       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 03:16:46.818393       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 03:16:46.835304       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 03:16:46.835330       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 03:16:47.282206       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 03:16:47.327702       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 03:16:47.457219       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 03:16:47.458178       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 03:16:47.461771       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 03:16:48.013151       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 03:16:49.229763       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 03:16:49.266703       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 03:16:54.580523       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 03:17:01.423614       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 03:17:01.589189       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9] <==
	* I0817 03:18:00.887781       1 controller.go:83] Starting OpenAPI AggregationController
	I0817 03:18:00.887861       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0817 03:18:00.887934       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0817 03:18:01.143984       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0817 03:18:01.170181       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0817 03:18:01.170668       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0817 03:18:01.171581       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 03:18:01.172644       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 03:18:01.173090       1 cache.go:39] Caches are synced for autoregister controller
	I0817 03:18:01.186686       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0817 03:18:01.197589       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 03:18:01.245265       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 03:18:01.892509       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 03:18:01.983829       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 03:18:01.983863       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 03:18:02.798283       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	W0817 03:18:02.981763       1 handler_proxy.go:104] no RequestInfo found in the context
	E0817 03:18:02.981825       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 03:18:02.981837       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 03:18:02.986072       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 03:18:03.015915       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 03:18:03.109304       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 03:18:03.116144       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 03:18:04.246496       1 controller.go:611] quota admission added evaluator for: namespaces
	
	* 
	* ==> kube-controller-manager [0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587] <==
	* I0817 03:17:00.905515       1 range_allocator.go:373] Set node newest-cni-20210817031538-1554185 PodCIDR to [192.168.0.0/24]
	I0817 03:17:00.905540       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-newest-cni-20210817031538-1554185" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 03:17:00.917852       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0817 03:17:00.926716       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0817 03:17:00.927144       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-newest-cni-20210817031538-1554185" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 03:17:00.957519       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0817 03:17:00.963921       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0817 03:17:00.971402       1 shared_informer.go:247] Caches are synced for attach detach 
	I0817 03:17:01.046318       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 03:17:01.067536       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 03:17:01.070853       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 03:17:01.430722       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-clj8s"
	I0817 03:17:01.434536       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w8m9q"
	I0817 03:17:01.546476       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 03:17:01.546499       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 03:17:01.555988       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 03:17:01.596460       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I0817 03:17:01.889433       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-x8zkx"
	I0817 03:17:01.910933       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-2mpcf"
	I0817 03:17:02.060976       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0817 03:17:02.102734       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-2mpcf"
	I0817 03:17:03.976197       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0817 03:17:04.000929       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0817 03:17:04.021218       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0817 03:17:04.047572       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-kfrc2"
	
	* 
	* ==> kube-controller-manager [ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935] <==
	* I0817 03:18:05.021252       1 shared_informer.go:240] Waiting for caches to sync for PV protection
	I0817 03:18:05.025260       1 controllermanager.go:577] Started "ttl"
	I0817 03:18:05.025611       1 ttl_controller.go:121] Starting TTL controller
	I0817 03:18:05.025698       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I0817 03:18:05.039322       1 controllermanager.go:577] Started "root-ca-cert-publisher"
	I0817 03:18:05.039544       1 publisher.go:107] Starting root CA certificate configmap publisher
	I0817 03:18:05.040040       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	W0817 03:18:05.048117       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0817 03:18:05.067904       1 controllermanager.go:577] Started "endpointslice"
	I0817 03:18:05.068214       1 endpointslice_controller.go:257] Starting endpoint slice controller
	I0817 03:18:05.071329       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0817 03:18:05.075257       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
	I0817 03:18:05.075403       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0817 03:18:05.075554       1 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0817 03:18:05.076393       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
	I0817 03:18:05.076552       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0817 03:18:05.076669       1 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0817 03:18:05.077448       1 controllermanager.go:577] Started "csrsigning"
	I0817 03:18:05.077606       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0817 03:18:05.078890       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0817 03:18:05.077625       1 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0817 03:18:05.077650       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0817 03:18:05.077662       1 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0817 03:18:05.079568       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0817 03:18:05.083112       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-proxy [ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d] <==
	* I0817 03:18:03.010014       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 03:18:03.010056       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 03:18:03.010068       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0817 03:18:03.040069       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 03:18:03.040100       1 server_others.go:212] Using iptables Proxier.
	I0817 03:18:03.040110       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 03:18:03.040126       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 03:18:03.040416       1 server.go:649] Version: v1.22.0-rc.0
	I0817 03:18:03.047840       1 config.go:315] Starting service config controller
	I0817 03:18:03.047859       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 03:18:03.048042       1 config.go:224] Starting endpoint slice config controller
	I0817 03:18:03.048054       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0817 03:18:03.095631       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210817031538-1554185.169bf9bd9f89f2f4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ee87ac2b37ee5, ext:108618382, loc:(*time.Location)(0x2698ec0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210817031538-1554185", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"",
Name:"newest-cni-20210817031538-1554185", UID:"newest-cni-20210817031538-1554185", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210817031538-1554185.169bf9bd9f89f2f4" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0817 03:18:03.148094       1 shared_informer.go:247] Caches are synced for service config 
	I0817 03:18:03.148211       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a] <==
	* I0817 03:17:02.301489       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 03:17:02.301536       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 03:17:02.301551       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0817 03:17:02.341341       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 03:17:02.341369       1 server_others.go:212] Using iptables Proxier.
	I0817 03:17:02.341380       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 03:17:02.341467       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 03:17:02.341810       1 server.go:649] Version: v1.22.0-rc.0
	I0817 03:17:02.343571       1 config.go:315] Starting service config controller
	I0817 03:17:02.343584       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 03:17:02.343612       1 config.go:224] Starting endpoint slice config controller
	I0817 03:17:02.343615       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0817 03:17:02.348376       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210817031538-1554185.169bf9af7d590edf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ee86b9464bea4, ext:92557120, loc:(*time.Location)(0x2698ec0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210817031538-1554185", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210817031538-1554185", UID:"newest-cni-20210817031538-1554185", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210817031538-1554185.169bf9af7d590edf" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0817 03:17:02.444623       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 03:17:02.444633       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727] <==
	* W0817 03:17:54.224145       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0817 03:17:57.740895       1 serving.go:347] Generated self-signed cert in-memory
	W0817 03:18:01.056946       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 03:18:01.056978       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 03:18:01.056988       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 03:18:01.056994       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 03:18:01.137207       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0817 03:18:01.137306       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 03:18:01.137325       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 03:18:01.137343       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0817 03:18:01.149063       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 03:18:01.155715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 03:18:01.155824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 03:18:01.155900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 03:18:01.155971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 03:18:01.156032       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0817 03:18:01.271018       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58] <==
	* I0817 03:16:46.004139       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0817 03:16:46.015048       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.031692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.031853       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 03:16:46.032119       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.032183       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 03:16:46.032336       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 03:16:46.032402       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 03:16:46.032467       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 03:16:46.032517       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 03:16:46.032568       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 03:16:46.032895       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 03:16:46.036442       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 03:16:46.036611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.036775       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 03:16:46.036929       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 03:16:46.856811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.858998       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.920511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.931819       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 03:16:46.933785       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 03:16:47.077377       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 03:16:47.131082       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 03:16:47.297332       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 03:16:49.806361       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 03:17:25 UTC, end at Tue 2021-08-17 03:18:18 UTC. --
	Aug 17 03:18:00 newest-cni-20210817031538-1554185 kubelet[663]: E0817 03:18:00.748492     663 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210817031538-1554185\" not found"
	Aug 17 03:18:00 newest-cni-20210817031538-1554185 kubelet[663]: E0817 03:18:00.849025     663 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210817031538-1554185\" not found"
	Aug 17 03:18:00 newest-cni-20210817031538-1554185 kubelet[663]: E0817 03:18:00.952912     663 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210817031538-1554185\" not found"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.053081     663 kuberuntime_manager.go:1075] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.055685     663 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.200896     663 kubelet_node_status.go:109] "Node was previously registered" node="newest-cni-20210817031538-1554185"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.200988     663 kubelet_node_status.go:74] "Successfully registered node" node="newest-cni-20210817031538-1554185"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.948658     663 apiserver.go:52] "Watching apiserver"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.953731     663 topology_manager.go:200] "Topology Admit Handler"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.956868     663 topology_manager.go:200] "Topology Admit Handler"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.058958     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/929df3e0-ea05-4a55-b16d-ac959dbf86a7-kube-proxy\") pod \"kube-proxy-clj8s\" (UID: \"929df3e0-ea05-4a55-b16d-ac959dbf86a7\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059014     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/929df3e0-ea05-4a55-b16d-ac959dbf86a7-lib-modules\") pod \"kube-proxy-clj8s\" (UID: \"929df3e0-ea05-4a55-b16d-ac959dbf86a7\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059045     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktdkc\" (UniqueName: \"kubernetes.io/projected/caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf-kube-api-access-ktdkc\") pod \"kindnet-w8m9q\" (UID: \"caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059072     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grkvl\" (UniqueName: \"kubernetes.io/projected/929df3e0-ea05-4a55-b16d-ac959dbf86a7-kube-api-access-grkvl\") pod \"kube-proxy-clj8s\" (UID: \"929df3e0-ea05-4a55-b16d-ac959dbf86a7\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059094     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf-cni-cfg\") pod \"kindnet-w8m9q\" (UID: \"caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059118     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf-xtables-lock\") pod \"kindnet-w8m9q\" (UID: \"caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059139     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf-lib-modules\") pod \"kindnet-w8m9q\" (UID: \"caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059162     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/929df3e0-ea05-4a55-b16d-ac959dbf86a7-xtables-lock\") pod \"kube-proxy-clj8s\" (UID: \"929df3e0-ea05-4a55-b16d-ac959dbf86a7\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059173     663 reconciler.go:157] "Reconciler: start to sync state"
	Aug 17 03:18:03 newest-cni-20210817031538-1554185 kubelet[663]: E0817 03:18:03.098059     663 summary_sys_containers.go:47] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Aug 17 03:18:03 newest-cni-20210817031538-1554185 kubelet[663]: E0817 03:18:03.098131     663 helpers.go:673] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Aug 17 03:18:05 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:05.557833     663 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 17 03:18:05 newest-cni-20210817031538-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 03:18:05 newest-cni-20210817031538-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 03:18:05 newest-cni-20210817031538-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 03:18:18.150253 1760814 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210817031538-1554185
helpers_test.go:236: (dbg) docker inspect newest-cni-20210817031538-1554185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45",
	        "Created": "2021-08-17T03:15:40.389083806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1757577,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T03:17:25.598212216Z",
	            "FinishedAt": "2021-08-17T03:17:24.280896174Z"
	        },
	        "Image": "sha256:760046b5046513d32c77afbe96fbd356b26bf3b6019adf03f44d43c481259127",
	        "ResolvConfPath": "/var/lib/docker/containers/00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45/hostname",
	        "HostsPath": "/var/lib/docker/containers/00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45/hosts",
	        "LogPath": "/var/lib/docker/containers/00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45/00337eb3ef8aaf02771a127a8c204215c2824a832f8a02c76503b37a775fbb45-json.log",
	        "Name": "/newest-cni-20210817031538-1554185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-20210817031538-1554185:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210817031538-1554185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d5fc6a736ab98ef92456d43ae6dc75e59b09f76be0acc4655fb84146a0109d9-init/diff:/var/lib/docker/overlay2/e74ab7b1cc19a0dada1c449eaa6ebd10cefe120554e35f1699b6e46928781f46/diff:/var/lib/docker/overlay2/a9911e57b97d49520688e77735490a2082753d8a71686d595fa247f87b53555f/diff:/var/lib/docker/overlay2/a8540f5935840e64012683638f94d8653639344abc4cd9dc615ca3d962492274/diff:/var/lib/docker/overlay2/0a5e3f51b7f188d911aa13f7a8a86e33e6e98391d3279381a89a34e6da8d1ed0/diff:/var/lib/docker/overlay2/87f189f298d6b367a0182a8e83d928df16bf9113e6860bdbbc1dda76371ef19f/diff:/var/lib/docker/overlay2/d0c4a3b50de5d73d28b54625516d2028d37d93b1ec16ee8f474140529a9d0a55/diff:/var/lib/docker/overlay2/6a812787ba9019dbc421e093581f04052c035c04b6fa7d906a0397a387f09c0c/diff:/var/lib/docker/overlay2/2279ae30ead8e9a3d48881308408a741a945bef5f88ac37b40e7a83682db427f/diff:/var/lib/docker/overlay2/440bd0fb409c77ba99c842b715d37bf817af845471348663335fa0bf5adf1cf7/diff:/var/lib/docker/overlay2/2b30f8
eed2802d65b155145159fadc4c4fba66f7cf97c37d1ade4e77da72cf1d/diff:/var/lib/docker/overlay2/c9b439971c9e43c3fb1cca68b353121420971959922f273ce05ff78f745ff2bd/diff:/var/lib/docker/overlay2/f54b5ed3aad42ddcc9fd841149d0e2c47f71ce128c09ad0d1925ba8e01ebaf1d/diff:/var/lib/docker/overlay2/8ce6dd67f85156b5fe64680829c6d014264aa68b3a40d81662a1f801a04efb9c/diff:/var/lib/docker/overlay2/2b94bd88aa582585f78cd4af6ac0183849abc18535768a693b0f03b524108b1f/diff:/var/lib/docker/overlay2/9adf88800b7ecbba7b52026ae9314b1a19d69bb882db984b4569c59c922cdd34/diff:/var/lib/docker/overlay2/4be8b096208c54bf6c195a77f8bd615706c0bec0ba537b4d5630e5f21bccdfe9/diff:/var/lib/docker/overlay2/40edad125972b2b193bd6dc1e01ae7ac953cec50990cfc7841847fab7723f13d/diff:/var/lib/docker/overlay2/56100b111a1db10b78b7506ed1e0bcdfda2ed9083c425623af488716d6946a23/diff:/var/lib/docker/overlay2/16a57e92a0880ea9fa67b34ed4be7fc9c7ab7e965569e9e56aab92737dce7b0e/diff:/var/lib/docker/overlay2/a310979be8359062a6c9ba2e82a68513f6a8a618e0735ccf1722dbb63aa4f55c/diff:/var/lib/d
ocker/overlay2/31bcb20d4bed51687f63e63298a6ebd381ed8778d4f511a7a5d219f27302155c/diff:/var/lib/docker/overlay2/7b40b337ee74675834b7c174d49113d86ae9ffda36208feac19ecafe27df4b40/diff:/var/lib/docker/overlay2/ccdf3fb78439f61fdfe2a2638eb46805d9b77ede7a3fea30353c637c603fd5f2/diff:/var/lib/docker/overlay2/2dee17455dd303d564dbd195642de7647ad0bafe6ef1f78d74e1852c7de04437/diff:/var/lib/docker/overlay2/966adb5eecebdb4b4bdfd5c65437f0139f409e5c9aef789a6846264154a35c0b/diff:/var/lib/docker/overlay2/b97f6fbdea073fddb4bfc973d74816923d7d325a9fecebfee88036718979e034/diff:/var/lib/docker/overlay2/c005a03609375cf7deb6b7be9c156f97bb91b1044c2f787635d63a0f939f2170/diff:/var/lib/docker/overlay2/6f3a43add8dfbc5ceba13c34f20b76c2aa3dfda99c670559dba63cbd79e37a5d/diff:/var/lib/docker/overlay2/73577e0b273d067405eaeeea0d698a9678f2c54df7628e57d6ed5620c9e4a12f/diff:/var/lib/docker/overlay2/db6970e09285edcaffbbd14caf82fbbae6157ac6d47205ec44a846114661f997/diff:/var/lib/docker/overlay2/fb4877c2fb3a2e4227133f00f28c332335352085577d3b4974c34d59770
39849/diff:/var/lib/docker/overlay2/300341371be3f65fc28abc4d692825461242a13462dbcf14856459743ef9bec6/diff:/var/lib/docker/overlay2/acbed929a7b3e0529bb33b9e9ece267ee1cb85440cbed0a62b89f334d05ef410/diff:/var/lib/docker/overlay2/c2262196eb3da57fe2f90bc7b62a02d1d4d4b7856b5e346684da4b4643709f95/diff:/var/lib/docker/overlay2/469ca2e4f62cd4e0b98d3ff5e24f2d60e66792a960884f5a13d5dd70acb96230/diff:/var/lib/docker/overlay2/8e3d77f1e870b518f1d8d5533e6a93fc366a4127779761e825c74b5304f667d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d5fc6a736ab98ef92456d43ae6dc75e59b09f76be0acc4655fb84146a0109d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d5fc6a736ab98ef92456d43ae6dc75e59b09f76be0acc4655fb84146a0109d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d5fc6a736ab98ef92456d43ae6dc75e59b09f76be0acc4655fb84146a0109d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210817031538-1554185",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210817031538-1554185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210817031538-1554185",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210817031538-1554185",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210817031538-1554185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd4213be6abb1c5cd4a9ea453ed9a6707bf98bacc73bf0a3d06b9fb21dfcaa6e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50502"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50501"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50500"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cd4213be6abb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210817031538-1554185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "00337eb3ef8a",
	                        "newest-cni-20210817031538-1554185"
	                    ],
	                    "NetworkID": "d76af0dbb8c0f4682496f6ba0caf4de1b85120cf92e723f44f1621b8c2b2362f",
	                    "EndpointID": "e6934544be6a7ed5427c5032011f456a7c842dac3ee5c5171b8500f2dc5abaa8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210817031538-1554185 -n newest-cni-20210817031538-1554185
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210817031538-1554185 -n newest-cni-20210817031538-1554185: exit status 2 (339.74084ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-20210817031538-1554185 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 -p newest-cni-20210817031538-1554185 logs -n 25: exit status 110 (10.918242798s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                   Profile                    |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|----------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:22 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                              |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:01:42 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                              |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:01:42 UTC | Tue, 17 Aug 2021 03:07:27 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                              |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                              |         |         |                               |                               |
	|         | --driver=docker                                            |                                              |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                              |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:38 UTC | Tue, 17 Aug 2021 03:07:38 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                              |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:40 UTC | Tue, 17 Aug 2021 03:07:41 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| -p      | embed-certs-20210817025908-1554185                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:42 UTC | Tue, 17 Aug 2021 03:07:43 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:44 UTC | Tue, 17 Aug 2021 03:07:47 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210817025908-1554185           | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:47 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | embed-certs-20210817025908-1554185                         |                                              |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20210817030748-1554185 | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:07:48 UTC |
	|         | disable-driver-mounts-20210817030748-1554185               |                                              |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:07:48 UTC | Tue, 17 Aug 2021 03:09:14 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                              |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                              |         |         |                               |                               |
	|         | --driver=docker                                            |                                              |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:24 UTC | Tue, 17 Aug 2021 03:09:24 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                              |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                              |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:25 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                              |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:09:45 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                              |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:09:45 UTC | Tue, 17 Aug 2021 03:15:18 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                              |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                              |         |         |                               |                               |
	|         | --driver=docker                                            |                                              |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:28 UTC | Tue, 17 Aug 2021 03:15:29 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                              |         |         |                               |                               |
	| -p      | no-preload-20210817030748-1554185                          | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:31 UTC | Tue, 17 Aug 2021 03:15:32 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| -p      | no-preload-20210817030748-1554185                          | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:33 UTC | Tue, 17 Aug 2021 03:15:34 UTC |
	|         | logs -n 25                                                 |                                              |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:35 UTC | Tue, 17 Aug 2021 03:15:38 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210817030748-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:38 UTC | Tue, 17 Aug 2021 03:15:38 UTC |
	|         | no-preload-20210817030748-1554185                          |                                              |         |         |                               |                               |
	| start   | -p newest-cni-20210817031538-1554185 --memory=2200         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:15:39 UTC | Tue, 17 Aug 2021 03:17:03 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                              |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                              |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                              |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                              |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:03 UTC | Tue, 17 Aug 2021 03:17:04 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                              |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                              |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:04 UTC | Tue, 17 Aug 2021 03:17:24 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                              |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:24 UTC | Tue, 17 Aug 2021 03:17:24 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                              |         |         |                               |                               |
	| start   | -p newest-cni-20210817031538-1554185 --memory=2200         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:17:24 UTC | Tue, 17 Aug 2021 03:18:04 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                              |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                              |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                              |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                              |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                              |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                              |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210817031538-1554185            | jenkins | v1.22.0 | Tue, 17 Aug 2021 03:18:04 UTC | Tue, 17 Aug 2021 03:18:05 UTC |
	|         | newest-cni-20210817031538-1554185                          |                                              |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                              |         |         |                               |                               |
	|---------|------------------------------------------------------------|----------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 03:17:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 03:17:24.818699 1757367 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:17:24.818847 1757367 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:17:24.818875 1757367 out.go:311] Setting ErrFile to fd 2...
	I0817 03:17:24.818891 1757367 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:17:24.819041 1757367 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:17:24.819309 1757367 out.go:305] Setting JSON to false
	I0817 03:17:24.820560 1757367 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39583,"bootTime":1629130662,"procs":431,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 03:17:24.820655 1757367 start.go:121] virtualization:  
	I0817 03:17:24.823573 1757367 out.go:177] * [newest-cni-20210817031538-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 03:17:24.825366 1757367 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 03:17:24.823731 1757367 notify.go:169] Checking for updates...
	I0817 03:17:24.827450 1757367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:17:24.829313 1757367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 03:17:24.831460 1757367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 03:17:24.831908 1757367 config.go:177] Loaded profile config "newest-cni-20210817031538-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:17:24.832380 1757367 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 03:17:24.902448 1757367 docker.go:132] docker version: linux-20.10.8
	I0817 03:17:24.902550 1757367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:17:25.046236 1757367 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:17:24.961542715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:17:25.046338 1757367 docker.go:244] overlay module found
	I0817 03:17:25.048539 1757367 out.go:177] * Using the docker driver based on existing profile
	I0817 03:17:25.048564 1757367 start.go:278] selected driver: docker
	I0817 03:17:25.048570 1757367 start.go:751] validating driver "docker" against &{Name:newest-cni-20210817031538-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817031538-1554185 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain]
VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:17:25.048677 1757367 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 03:17:25.048717 1757367 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:17:25.048732 1757367 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:17:25.050546 1757367 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:17:25.050875 1757367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:17:25.129226 1757367 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:17:25.078286511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0817 03:17:25.129350 1757367 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:17:25.129370 1757367 out.go:242] ! Your cgroup does not allow setting memory.
	I0817 03:17:25.132139 1757367 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:17:25.132231 1757367 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0817 03:17:25.132254 1757367 cni.go:93] Creating CNI manager for ""
	I0817 03:17:25.132261 1757367 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:17:25.132276 1757367 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 03:17:25.132283 1757367 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 03:17:25.132292 1757367 start_flags.go:277] config:
	{Name:newest-cni-20210817031538-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817031538-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:fal
se kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:17:25.134333 1757367 out.go:177] * Starting control plane node newest-cni-20210817031538-1554185 in cluster newest-cni-20210817031538-1554185
	I0817 03:17:25.134366 1757367 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 03:17:25.136042 1757367 out.go:177] * Pulling base image ...
	I0817 03:17:25.136063 1757367 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 03:17:25.136092 1757367 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4
	I0817 03:17:25.136108 1757367 cache.go:56] Caching tarball of preloaded images
	I0817 03:17:25.136229 1757367 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 03:17:25.136250 1757367 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0817 03:17:25.136361 1757367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/config.json ...
	I0817 03:17:25.136519 1757367 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 03:17:25.179296 1757367 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 03:17:25.179315 1757367 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 03:17:25.179327 1757367 cache.go:205] Successfully downloaded all kic artifacts
	I0817 03:17:25.179364 1757367 start.go:313] acquiring machines lock for newest-cni-20210817031538-1554185: {Name:mkfbad738c3621399011c572f2cc8ad1253002d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:17:25.179455 1757367 start.go:317] acquired machines lock for "newest-cni-20210817031538-1554185" in 63.056µs
	I0817 03:17:25.179482 1757367 start.go:93] Skipping create...Using existing machine configuration
	I0817 03:17:25.179492 1757367 fix.go:55] fixHost starting: 
	I0817 03:17:25.179764 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:17:25.209174 1757367 fix.go:108] recreateIfNeeded on newest-cni-20210817031538-1554185: state=Stopped err=<nil>
	W0817 03:17:25.209200 1757367 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 03:17:25.211374 1757367 out.go:177] * Restarting existing docker container for "newest-cni-20210817031538-1554185" ...
	I0817 03:17:25.211431 1757367 cli_runner.go:115] Run: docker start newest-cni-20210817031538-1554185
	I0817 03:17:25.607494 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:17:25.642942 1757367 kic.go:420] container "newest-cni-20210817031538-1554185" state is running.
	I0817 03:17:25.643309 1757367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817031538-1554185
	I0817 03:17:25.686073 1757367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/config.json ...
	I0817 03:17:25.686234 1757367 machine.go:88] provisioning docker machine ...
	I0817 03:17:25.686254 1757367 ubuntu.go:169] provisioning hostname "newest-cni-20210817031538-1554185"
	I0817 03:17:25.686299 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:25.724704 1757367 main.go:130] libmachine: Using SSH client type: native
	I0817 03:17:25.724873 1757367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50503 <nil> <nil>}
	I0817 03:17:25.724887 1757367 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210817031538-1554185 && echo "newest-cni-20210817031538-1554185" | sudo tee /etc/hostname
	I0817 03:17:25.725451 1757367 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0817 03:17:28.850133 1757367 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210817031538-1554185
	
	I0817 03:17:28.850199 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:28.890023 1757367 main.go:130] libmachine: Using SSH client type: native
	I0817 03:17:28.890192 1757367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50503 <nil> <nil>}
	I0817 03:17:28.890221 1757367 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210817031538-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210817031538-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210817031538-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 03:17:29.010228 1757367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 03:17:29.010254 1757367 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 03:17:29.010289 1757367 ubuntu.go:177] setting up certificates
	I0817 03:17:29.010298 1757367 provision.go:83] configureAuth start
	I0817 03:17:29.010351 1757367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817031538-1554185
	I0817 03:17:29.043727 1757367 provision.go:138] copyHostCerts
	I0817 03:17:29.043785 1757367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 03:17:29.043798 1757367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 03:17:29.043853 1757367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 03:17:29.043927 1757367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 03:17:29.043939 1757367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 03:17:29.043964 1757367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 03:17:29.044011 1757367 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 03:17:29.044021 1757367 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 03:17:29.044041 1757367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 03:17:29.044112 1757367 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210817031538-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210817031538-1554185]
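For reference, the provision step above signs a server certificate against the cached minikube CA with the SAN list shown in that line. The Go sketch below is illustrative only (it is not minikube's provision.go): it generates a throwaway CA in place of ca.pem/ca-key.pem and shows how the same SAN set would be attached with crypto/x509 (errors elided for brevity).

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikube's ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20210817031538-1554185"}},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-20210817031538-1554185"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }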
	I0817 03:17:29.308896 1757367 provision.go:172] copyRemoteCerts
	I0817 03:17:29.308953 1757367 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 03:17:29.308997 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:29.338849 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:17:29.420648 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 03:17:29.435629 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0817 03:17:29.450565 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 03:17:29.464932 1757367 provision.go:86] duration metric: configureAuth took 454.623957ms
	I0817 03:17:29.464952 1757367 ubuntu.go:193] setting minikube options for container-runtime
	I0817 03:17:29.465116 1757367 config.go:177] Loaded profile config "newest-cni-20210817031538-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:17:29.465125 1757367 machine.go:91] provisioned docker machine in 3.778885025s
	I0817 03:17:29.465132 1757367 start.go:267] post-start starting for "newest-cni-20210817031538-1554185" (driver="docker")
	I0817 03:17:29.465139 1757367 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 03:17:29.465182 1757367 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 03:17:29.465216 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:29.495469 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:17:29.576507 1757367 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 03:17:29.578909 1757367 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 03:17:29.578939 1757367 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 03:17:29.578951 1757367 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 03:17:29.578960 1757367 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 03:17:29.578970 1757367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 03:17:29.579014 1757367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 03:17:29.579103 1757367 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 03:17:29.579198 1757367 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 03:17:29.584741 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:17:29.599075 1757367 start.go:270] post-start completed in 133.932037ms
	I0817 03:17:29.599117 1757367 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 03:17:29.599153 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:29.629395 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:17:29.711161 1757367 fix.go:57] fixHost completed within 4.531666977s
	I0817 03:17:29.711177 1757367 start.go:80] releasing machines lock for "newest-cni-20210817031538-1554185", held for 4.531709684s
	I0817 03:17:29.711241 1757367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817031538-1554185
	I0817 03:17:29.761806 1757367 ssh_runner.go:149] Run: systemctl --version
	I0817 03:17:29.761851 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:29.761879 1757367 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 03:17:29.761932 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:17:29.833366 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:17:29.849203 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:17:30.100448 1757367 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 03:17:30.112101 1757367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 03:17:30.120206 1757367 docker.go:153] disabling docker service ...
	I0817 03:17:30.120242 1757367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 03:17:30.128879 1757367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 03:17:30.136660 1757367 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 03:17:30.205674 1757367 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 03:17:30.282520 1757367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 03:17:30.290427 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 03:17:30.300976 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
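For reference, the long base64 argument above is the containerd config.toml that the command writes to /etc/containerd/config.toml on the node. A minimal Go sketch for decoding such a blob locally to review it before it lands on the host (illustrative only; it simply expects the base64 string as its first argument):

    package main

    import (
        "encoding/base64"
        "fmt"
        "os"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: decode <base64-blob>")
            os.Exit(1)
        }
        // The argument is the base64 payload copied from the log line above.
        raw, err := base64.StdEncoding.DecodeString(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, "decode failed:", err)
            os.Exit(1)
        }
        // Print the decoded containerd config.toml for review.
        fmt.Print(string(raw))
    }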
	I0817 03:17:30.311923 1757367 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 03:17:30.317176 1757367 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 03:17:30.322396 1757367 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 03:17:30.394562 1757367 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 03:17:30.523003 1757367 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 03:17:30.523101 1757367 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 03:17:30.526569 1757367 start.go:413] Will wait 60s for crictl version
	I0817 03:17:30.526647 1757367 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:17:30.555522 1757367 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-17T03:17:30Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
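The `crictl version` call fails here because containerd was restarted a moment earlier and its CRI server is not initialized yet, so retry.go schedules another attempt roughly 11 s later inside the 60 s wait window. A hedged Go sketch of the same poll-until-ready pattern (assumes crictl on PATH and passwordless sudo; the fixed 5 s delay is an arbitrary stand-in for minikube's own retry delay):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second) // mirror the "Will wait 60s" window
        for {
            out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
            if err == nil {
                fmt.Print(string(out))
                return
            }
            if time.Now().After(deadline) {
                fmt.Printf("runtime never became ready: %v\n%s", err, out)
                return
            }
            time.Sleep(5 * time.Second) // crude fixed backoff for the sketch
        }
    }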
	I0817 03:17:41.602309 1757367 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:17:41.626249 1757367 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 03:17:41.626295 1757367 ssh_runner.go:149] Run: containerd --version
	I0817 03:17:41.647551 1757367 ssh_runner.go:149] Run: containerd --version
	I0817 03:17:41.669710 1757367 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0817 03:17:41.669775 1757367 cli_runner.go:115] Run: docker network inspect newest-cni-20210817031538-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:17:41.698711 1757367 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 03:17:41.701518 1757367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
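The shell pipeline above drops any stale host.minikube.internal line from /etc/hosts and re-adds the 192.168.49.1 mapping via a temp file. The Go sketch below mirrors that upsert on a local copy of the file (illustrative; the real flow runs over SSH with sudo, and the file name hosts.copy is hypothetical):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so that exactly one line maps
    // the given name, mirroring the grep -v / echo / cp pipeline in the log above.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // "hosts.copy" is a hypothetical local copy of the node's /etc/hosts.
        if err := upsertHost("hosts.copy", "192.168.49.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }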
	I0817 03:17:41.711319 1757367 out.go:177]   - kubelet.network-plugin=cni
	I0817 03:17:41.713014 1757367 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0817 03:17:41.713068 1757367 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 03:17:41.713126 1757367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:17:41.738639 1757367 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:17:41.738653 1757367 containerd.go:517] Images already preloaded, skipping extraction
	I0817 03:17:41.738687 1757367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:17:41.764883 1757367 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:17:41.764898 1757367 cache_images.go:74] Images are preloaded, skipping loading
	I0817 03:17:41.764942 1757367 ssh_runner.go:149] Run: sudo crictl info
	I0817 03:17:41.797816 1757367 cni.go:93] Creating CNI manager for ""
	I0817 03:17:41.797838 1757367 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:17:41.797863 1757367 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0817 03:17:41.797884 1757367 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210817031538-1554185 NodeName:newest-cni-20210817031538-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true l
eader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 03:17:41.798049 1757367 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20210817031538-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 03:17:41.798148 1757367 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210817031538-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817031538-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 03:17:41.798209 1757367 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0817 03:17:41.804886 1757367 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 03:17:41.804933 1757367 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 03:17:41.812508 1757367 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (620 bytes)
	I0817 03:17:41.824471 1757367 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 03:17:41.839852 1757367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0817 03:17:41.851112 1757367 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 03:17:41.853946 1757367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:17:41.863529 1757367 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185 for IP: 192.168.49.2
	I0817 03:17:41.863572 1757367 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 03:17:41.863591 1757367 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 03:17:41.863640 1757367 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/client.key
	I0817 03:17:41.863658 1757367 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/apiserver.key.dd3b5fb2
	I0817 03:17:41.863680 1757367 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/proxy-client.key
	I0817 03:17:41.863776 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 03:17:41.863814 1757367 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 03:17:41.863828 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 03:17:41.863852 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 03:17:41.863883 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 03:17:41.863908 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 03:17:41.863953 1757367 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:17:41.865051 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 03:17:41.889285 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 03:17:41.910628 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 03:17:41.929484 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210817031538-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 03:17:41.954093 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 03:17:41.978403 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 03:17:41.998169 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 03:17:42.013452 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 03:17:42.028753 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 03:17:42.043330 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 03:17:42.059012 1757367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 03:17:42.073148 1757367 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 03:17:42.083617 1757367 ssh_runner.go:149] Run: openssl version
	I0817 03:17:42.088941 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 03:17:42.095676 1757367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 03:17:42.098315 1757367 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 03:17:42.098371 1757367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 03:17:42.102403 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
	I0817 03:17:42.108391 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 03:17:42.114349 1757367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 03:17:42.116938 1757367 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 03:17:42.116976 1757367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 03:17:42.121053 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 03:17:42.126699 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 03:17:42.132744 1757367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:17:42.135272 1757367 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:17:42.135310 1757367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:17:42.139756 1757367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
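The sequence above installs each PEM under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A hedged Go sketch of those two steps (assumes openssl is installed and the process is allowed to create symlinks in /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert computes the OpenSSL subject hash of a PEM certificate and links
    // it into /etc/ssl/certs as "<hash>.0", the layout the log above sets up.
    func linkCert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pem, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Replace any stale link, matching the "test -L ... || ln -fs ..." idiom in the log.
        _ = os.Remove(link)
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }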
	I0817 03:17:42.145496 1757367 kubeadm.go:390] StartCluster: {Name:newest-cni-20210817031538-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817031538-1554185 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map
[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:17:42.145589 1757367 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 03:17:42.145644 1757367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:17:42.171369 1757367 cri.go:76] found id: "cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2"
	I0817 03:17:42.171392 1757367 cri.go:76] found id: "c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a"
	I0817 03:17:42.171398 1757367 cri.go:76] found id: "ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da"
	I0817 03:17:42.171419 1757367 cri.go:76] found id: "0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587"
	I0817 03:17:42.171423 1757367 cri.go:76] found id: "21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158"
	I0817 03:17:42.171427 1757367 cri.go:76] found id: "a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58"
	I0817 03:17:42.171437 1757367 cri.go:76] found id: ""
	I0817 03:17:42.171468 1757367 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0817 03:17:42.185254 1757367 cri.go:103] JSON = null
	W0817 03:17:42.185303 1757367 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0817 03:17:42.185349 1757367 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 03:17:42.192145 1757367 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 03:17:42.192162 1757367 kubeadm.go:600] restartCluster start
	I0817 03:17:42.192197 1757367 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 03:17:42.197552 1757367 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:42.198421 1757367 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210817031538-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:17:42.198651 1757367 kubeconfig.go:128] "newest-cni-20210817031538-1554185" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0817 03:17:42.199144 1757367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:17:42.201287 1757367 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 03:17:42.207462 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:42.207519 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:42.216278 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:42.416610 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:42.416653 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:42.425139 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:42.616343 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:42.616415 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:42.624937 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:42.817032 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:42.817131 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:42.827111 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:43.017392 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:43.017468 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:43.027083 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:43.217326 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:43.217370 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:43.226041 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:43.417248 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:43.417322 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:43.425848 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:43.617094 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:43.617172 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:43.625990 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:43.817251 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:43.817312 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:43.827497 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:44.016845 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:44.016914 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:44.026276 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:44.216401 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:44.216476 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:44.225079 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:44.417330 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:44.417372 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:44.425977 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:44.617170 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:44.617244 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:44.625726 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:44.817265 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:44.817349 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:44.827569 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.016877 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:45.016941 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:45.026740 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.216999 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:45.217043 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:45.225669 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.225680 1757367 api_server.go:164] Checking apiserver status ...
	I0817 03:17:45.225713 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 03:17:45.234106 1757367 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.234151 1757367 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0817 03:17:45.234169 1757367 kubeadm.go:1032] stopping kube-system containers ...
	I0817 03:17:45.234178 1757367 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0817 03:17:45.234225 1757367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:17:45.257425 1757367 cri.go:76] found id: "cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2"
	I0817 03:17:45.257442 1757367 cri.go:76] found id: "c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a"
	I0817 03:17:45.257447 1757367 cri.go:76] found id: "ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da"
	I0817 03:17:45.257451 1757367 cri.go:76] found id: "0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587"
	I0817 03:17:45.257475 1757367 cri.go:76] found id: "21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158"
	I0817 03:17:45.257486 1757367 cri.go:76] found id: "a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58"
	I0817 03:17:45.257490 1757367 cri.go:76] found id: ""
	I0817 03:17:45.257495 1757367 cri.go:221] Stopping containers: [cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2 c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da 0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587 21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158 a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58]
	I0817 03:17:45.257531 1757367 ssh_runner.go:149] Run: which crictl
	I0817 03:17:45.259939 1757367 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2 c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da 0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587 21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158 a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58
	I0817 03:17:45.282084 1757367 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 03:17:45.290821 1757367 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:17:45.296642 1757367 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 03:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 17 03:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 17 03:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 17 03:16 /etc/kubernetes/scheduler.conf
	
	I0817 03:17:45.296684 1757367 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 03:17:45.302445 1757367 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 03:17:45.307967 1757367 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 03:17:45.313342 1757367 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.313406 1757367 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 03:17:45.318637 1757367 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 03:17:45.324252 1757367 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 03:17:45.324303 1757367 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 03:17:45.329495 1757367 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:17:45.335010 1757367 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 03:17:45.335029 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:17:45.395171 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:17:47.603212 1757367 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.207984373s)
	I0817 03:17:47.603242 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:17:47.752604 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:17:47.879104 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:17:48.003375 1757367 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:17:48.003426 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:48.513918 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:49.014079 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:49.513492 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:50.014314 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:50.514436 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:51.014230 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:51.513482 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:52.013528 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:52.513701 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:53.014116 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:53.513466 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:54.013795 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:54.513565 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:17:54.528870 1757367 api_server.go:70] duration metric: took 6.525497605s to wait for apiserver process to appear ...
	I0817 03:17:54.528886 1757367 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:17:54.528894 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:17:59.532693 1757367 api_server.go:255] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 03:18:00.033389 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:01.005646 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:18:01.005664 1757367 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:18:01.033720 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:01.102602 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 03:18:01.102619 1757367 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 03:18:01.532841 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:01.540973 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:18:01.541018 1757367 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:18:02.033196 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:02.041346 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 03:18:02.041372 1757367 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 03:18:02.532836 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:02.541511 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:18:02.555121 1757367 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 03:18:02.555142 1757367 api_server.go:129] duration metric: took 8.02625102s to wait for apiserver health ...
	I0817 03:18:02.555151 1757367 cni.go:93] Creating CNI manager for ""
	I0817 03:18:02.555158 1757367 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 03:18:02.557265 1757367 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 03:18:02.557321 1757367 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 03:18:02.561371 1757367 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl ...
	I0817 03:18:02.561384 1757367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 03:18:02.574495 1757367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 03:18:02.809600 1757367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:18:02.835213 1757367 system_pods.go:59] 9 kube-system pods found
	I0817 03:18:02.835247 1757367 system_pods.go:61] "coredns-78fcd69978-x8zkx" [95714572-ca09-4e17-981a-934153d9c863] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:02.835262 1757367 system_pods.go:61] "etcd-newest-cni-20210817031538-1554185" [97b912ad-8a27-4b97-b545-b16a9c788a47] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 03:18:02.835282 1757367 system_pods.go:61] "kindnet-w8m9q" [caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 03:18:02.835295 1757367 system_pods.go:61] "kube-apiserver-newest-cni-20210817031538-1554185" [92524635-9c14-45bd-8b34-fba35775cc9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 03:18:02.835352 1757367 system_pods.go:61] "kube-controller-manager-newest-cni-20210817031538-1554185" [79f2907d-b3cd-49a7-9d38-194068fe6579] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 03:18:02.835365 1757367 system_pods.go:61] "kube-proxy-clj8s" [929df3e0-ea05-4a55-b16d-ac959dbf86a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 03:18:02.835378 1757367 system_pods.go:61] "kube-scheduler-newest-cni-20210817031538-1554185" [5c5510bd-a46c-4369-b82d-8774f7d679d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 03:18:02.835390 1757367 system_pods.go:61] "metrics-server-7c784ccb57-kfrc2" [71464a64-042a-490f-ac6a-1e85150897c3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:02.835401 1757367 system_pods.go:61] "storage-provisioner" [d9c8ddfd-73dd-4bc3-9638-50cf0b37a760] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:02.835411 1757367 system_pods.go:74] duration metric: took 25.789786ms to wait for pod list to return data ...
	I0817 03:18:02.835421 1757367 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:18:02.841456 1757367 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:18:02.841482 1757367 node_conditions.go:123] node cpu capacity is 2
	I0817 03:18:02.841494 1757367 node_conditions.go:105] duration metric: took 6.067073ms to run NodePressure ...
	I0817 03:18:02.841507 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 03:18:03.123328 1757367 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 03:18:03.132589 1757367 ops.go:34] apiserver oom_adj: -16
	I0817 03:18:03.132628 1757367 kubeadm.go:604] restartCluster took 20.940459247s
	I0817 03:18:03.132646 1757367 kubeadm.go:392] StartCluster complete in 20.987153807s
	I0817 03:18:03.132671 1757367 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:18:03.132755 1757367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:18:03.133693 1757367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:18:03.137919 1757367 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210817031538-1554185" rescaled to 1
	I0817 03:18:03.137970 1757367 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0817 03:18:03.139927 1757367 out.go:177] * Verifying Kubernetes components...
	I0817 03:18:03.139993 1757367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:18:03.138221 1757367 config.go:177] Loaded profile config "newest-cni-20210817031538-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 03:18:03.138237 1757367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 03:18:03.138247 1757367 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0817 03:18:03.140140 1757367 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210817031538-1554185"
	I0817 03:18:03.140153 1757367 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210817031538-1554185"
	W0817 03:18:03.140159 1757367 addons.go:147] addon storage-provisioner should already be in state true
	I0817 03:18:03.140159 1757367 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210817031538-1554185"
	I0817 03:18:03.140174 1757367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210817031538-1554185"
	I0817 03:18:03.140180 1757367 host.go:66] Checking if "newest-cni-20210817031538-1554185" exists ...
	I0817 03:18:03.140450 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:03.140649 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:03.140710 1757367 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210817031538-1554185"
	I0817 03:18:03.140719 1757367 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210817031538-1554185"
	W0817 03:18:03.140724 1757367 addons.go:147] addon metrics-server should already be in state true
	I0817 03:18:03.140740 1757367 host.go:66] Checking if "newest-cni-20210817031538-1554185" exists ...
	I0817 03:18:03.141137 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:03.141300 1757367 addons.go:59] Setting dashboard=true in profile "newest-cni-20210817031538-1554185"
	I0817 03:18:03.141311 1757367 addons.go:135] Setting addon dashboard=true in "newest-cni-20210817031538-1554185"
	W0817 03:18:03.141316 1757367 addons.go:147] addon dashboard should already be in state true
	I0817 03:18:03.141332 1757367 host.go:66] Checking if "newest-cni-20210817031538-1554185" exists ...
	I0817 03:18:03.141725 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:03.224681 1757367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 03:18:03.224844 1757367 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:18:03.224856 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 03:18:03.224903 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:18:03.291242 1757367 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0817 03:18:03.293968 1757367 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0817 03:18:03.294014 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 03:18:03.294022 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 03:18:03.294067 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:18:03.341091 1757367 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0817 03:18:03.341143 1757367 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 03:18:03.341152 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 03:18:03.341198 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:18:03.337861 1757367 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210817031538-1554185"
	W0817 03:18:03.341369 1757367 addons.go:147] addon default-storageclass should already be in state true
	I0817 03:18:03.341394 1757367 host.go:66] Checking if "newest-cni-20210817031538-1554185" exists ...
	I0817 03:18:03.341836 1757367 cli_runner.go:115] Run: docker container inspect newest-cni-20210817031538-1554185 --format={{.State.Status}}
	I0817 03:18:03.396354 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:18:03.418272 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:18:03.448512 1757367 api_server.go:50] waiting for apiserver process to appear ...
	I0817 03:18:03.448568 1757367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 03:18:03.448704 1757367 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 03:18:03.467104 1757367 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 03:18:03.467118 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 03:18:03.467166 1757367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817031538-1554185
	I0817 03:18:03.499885 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:18:03.530405 1757367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210817031538-1554185/id_rsa Username:docker}
	I0817 03:18:03.534743 1757367 api_server.go:70] duration metric: took 396.74695ms to wait for apiserver process to appear ...
	I0817 03:18:03.534759 1757367 api_server.go:86] waiting for apiserver healthz status ...
	I0817 03:18:03.534767 1757367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 03:18:03.543922 1757367 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 03:18:03.544773 1757367 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 03:18:03.544824 1757367 api_server.go:129] duration metric: took 10.059861ms to wait for apiserver health ...
	I0817 03:18:03.544845 1757367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 03:18:03.550790 1757367 system_pods.go:59] 9 kube-system pods found
	I0817 03:18:03.550898 1757367 system_pods.go:61] "coredns-78fcd69978-x8zkx" [95714572-ca09-4e17-981a-934153d9c863] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:03.550922 1757367 system_pods.go:61] "etcd-newest-cni-20210817031538-1554185" [97b912ad-8a27-4b97-b545-b16a9c788a47] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 03:18:03.550941 1757367 system_pods.go:61] "kindnet-w8m9q" [caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 03:18:03.550981 1757367 system_pods.go:61] "kube-apiserver-newest-cni-20210817031538-1554185" [92524635-9c14-45bd-8b34-fba35775cc9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 03:18:03.551025 1757367 system_pods.go:61] "kube-controller-manager-newest-cni-20210817031538-1554185" [79f2907d-b3cd-49a7-9d38-194068fe6579] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0817 03:18:03.551067 1757367 system_pods.go:61] "kube-proxy-clj8s" [929df3e0-ea05-4a55-b16d-ac959dbf86a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 03:18:03.551089 1757367 system_pods.go:61] "kube-scheduler-newest-cni-20210817031538-1554185" [5c5510bd-a46c-4369-b82d-8774f7d679d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 03:18:03.551118 1757367 system_pods.go:61] "metrics-server-7c784ccb57-kfrc2" [71464a64-042a-490f-ac6a-1e85150897c3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:03.551139 1757367 system_pods.go:61] "storage-provisioner" [d9c8ddfd-73dd-4bc3-9638-50cf0b37a760] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0817 03:18:03.551156 1757367 system_pods.go:74] duration metric: took 6.295345ms to wait for pod list to return data ...
	I0817 03:18:03.551173 1757367 default_sa.go:34] waiting for default service account to be created ...
	I0817 03:18:03.561337 1757367 default_sa.go:45] found service account: "default"
	I0817 03:18:03.561355 1757367 default_sa.go:55] duration metric: took 10.154621ms for default service account to be created ...
	I0817 03:18:03.561363 1757367 kubeadm.go:547] duration metric: took 423.37027ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0817 03:18:03.561380 1757367 node_conditions.go:102] verifying NodePressure condition ...
	I0817 03:18:03.565050 1757367 node_conditions.go:122] node storage ephemeral capacity is 81118084Ki
	I0817 03:18:03.565069 1757367 node_conditions.go:123] node cpu capacity is 2
	I0817 03:18:03.565080 1757367 node_conditions.go:105] duration metric: took 3.692443ms to run NodePressure ...
	I0817 03:18:03.565092 1757367 start.go:231] waiting for startup goroutines ...
	I0817 03:18:03.662677 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 03:18:03.662693 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 03:18:03.676750 1757367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:18:03.727394 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 03:18:03.727410 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 03:18:03.734433 1757367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 03:18:03.741690 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 03:18:03.741730 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 03:18:03.777425 1757367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 03:18:03.777470 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0817 03:18:03.786772 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 03:18:03.786852 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0817 03:18:03.801896 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 03:18:03.801939 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 03:18:03.819288 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 03:18:03.819322 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 03:18:03.837158 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 03:18:03.837191 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 03:18:03.850394 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 03:18:03.850427 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 03:18:03.857456 1757367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 03:18:03.857497 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 03:18:03.879650 1757367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:18:03.879696 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 03:18:03.896680 1757367 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:18:03.896716 1757367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 03:18:03.924418 1757367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 03:18:03.964779 1757367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 03:18:04.300443 1757367 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210817031538-1554185"
	I0817 03:18:04.373100 1757367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0817 03:18:04.373164 1757367 addons.go:344] enableAddons completed in 1.23491856s
	I0817 03:18:04.436515 1757367 start.go:462] kubectl: 1.21.3, cluster: 1.22.0-rc.0 (minor skew: 1)
	I0817 03:18:04.438325 1757367 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210817031538-1554185" cluster and "default" namespace by default
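	[Note] The restart sequence visible above is the usual pattern: repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs until the process appears, then /healthz probes that move from 403 ("system:anonymous") through 500 (post-start hooks still pending) to 200. A rough, self-contained sketch of that health polling is below. The address 192.168.49.2:8443 is taken from the log; the client setup (InsecureSkipVerify, 500ms retry, 1-minute deadline) is an assumption for illustration only and is not minikube's actual api_server.go code.

	// Hypothetical sketch: poll the apiserver /healthz endpoint until it returns 200,
	// mirroring the probes logged between 03:17:54 and 03:18:02 above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip cert verification for the probe; an anonymous client like
			// this is also why the 403 "system:anonymous" responses above can appear.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.49.2:8443/healthz" // address from the log
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver is healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // retry interval, roughly matching the log
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}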
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	3480b6e9d1801       f37b7c809e5dc       16 seconds ago       Running             kindnet-cni               1                   fe7215298d43c
	ab86eee981b61       5f7fafb97c956       16 seconds ago       Running             kube-proxy                1                   e81492055fbc1
	482d4f309ae03       6fe8178781397       25 seconds ago       Running             kube-apiserver            1                   c0a8d7a4730f4
	6be36ee527254       2252d5eb703b0       25 seconds ago       Running             etcd                      1                   e285d26d47a9f
	ac3b3907e6fe2       41065afd0ca8b       25 seconds ago       Running             kube-controller-manager   1                   ac78a36df6a81
	4d71197216232       82ecd1e357878       25 seconds ago       Running             kube-scheduler            1                   085c3ba8f1d3f
	cc900511df519       f37b7c809e5dc       About a minute ago   Exited              kindnet-cni               0                   363ba39c89949
	c06aefb58ab88       5f7fafb97c956       About a minute ago   Exited              kube-proxy                0                   23a3da90f03dd
	ef9c6f6e4c7fc       2252d5eb703b0       About a minute ago   Exited              etcd                      0                   bf152b7155443
	0095f08395273       41065afd0ca8b       About a minute ago   Exited              kube-controller-manager   0                   ce251a441c8cf
	21668b68e79b6       6fe8178781397       About a minute ago   Exited              kube-apiserver            0                   c96eadb0c989c
	a2bfe6f66f2f9       82ecd1e357878       About a minute ago   Exited              kube-scheduler            0                   84b0f300d329a
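	[Note] The list above pairs each exited attempt-0 container with the attempt-1 replacement created by the restart; the exited IDs are the same ones kubeadm.go stopped at 03:17:45 after gathering them with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`. A minimal sketch of running that same command from Go follows; sudo and crictl being available on the node (e.g. via `minikube ssh`) is assumed.

	// Hypothetical sketch: list kube-system container IDs with the crictl invocation
	// that appears in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}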
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2021-08-17 03:17:25 UTC, end at Tue 2021-08-17 03:18:19 UTC. --
	Aug 17 03:17:54 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:17:54.175830290Z" level=info msg="StartContainer for \"482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9\" returns successfully"
	Aug 17 03:17:54 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:17:54.181469207Z" level=info msg="StartContainer for \"4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727\" returns successfully"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:01.053474115Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.577027954Z" level=info msg="StopPodSandbox for \"23a3da90f03dd5806a31740f8a18e34a8b2ed7a60fe2a5513d178ffb03d3a481\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.577101053Z" level=info msg="Container to stop \"c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.577192326Z" level=info msg="TearDown network for sandbox \"23a3da90f03dd5806a31740f8a18e34a8b2ed7a60fe2a5513d178ffb03d3a481\" successfully"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.577218885Z" level=info msg="StopPodSandbox for \"23a3da90f03dd5806a31740f8a18e34a8b2ed7a60fe2a5513d178ffb03d3a481\" returns successfully"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.577915182Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-clj8s,Uid:929df3e0-ea05-4a55-b16d-ac959dbf86a7,Namespace:kube-system,Attempt:1,}"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.585035192Z" level=info msg="StopPodSandbox for \"363ba39c89949d241f8c07e21761fdeb9ae46ae9b26a1a7cccd5f3c1530f47fe\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.585107035Z" level=info msg="Container to stop \"cc900511df519a93f555f3416a221d935c394528a2061fe6e527c0d2a927e9b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.585183433Z" level=info msg="TearDown network for sandbox \"363ba39c89949d241f8c07e21761fdeb9ae46ae9b26a1a7cccd5f3c1530f47fe\" successfully"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.585194181Z" level=info msg="StopPodSandbox for \"363ba39c89949d241f8c07e21761fdeb9ae46ae9b26a1a7cccd5f3c1530f47fe\" returns successfully"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.585943769Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-w8m9q,Uid:caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf,Namespace:kube-system,Attempt:1,}"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.607911318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075 pid=1129
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.625750251Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217 pid=1151
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.707083116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clj8s,Uid:929df3e0-ea05-4a55-b16d-ac959dbf86a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.709470587Z" level=info msg="CreateContainer within sandbox \"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.728965627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-w8m9q,Uid:caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.732686320Z" level=info msg="CreateContainer within sandbox \"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.766499509Z" level=info msg="CreateContainer within sandbox \"e81492055fbc1734a279e9bf49c0402596841698dd8fb887ad02a9f05f8a4075\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.766974933Z" level=info msg="StartContainer for \"ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.787367187Z" level=info msg="CreateContainer within sandbox \"fe7215298d43c641e1afd3610bbda6ea9db22c1a88da417969a43c130cad7217\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.790196779Z" level=info msg="StartContainer for \"3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180\""
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.945780700Z" level=info msg="StartContainer for \"ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d\" returns successfully"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 containerd[344]: time="2021-08-17T03:18:02.946322192Z" level=info msg="StartContainer for \"3480b6e9d1801dfee8d6e3ad0e2fbb16c8d68f391aeb9cf872891e3b2ba4e180\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001055] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000803] FS-Cache: N-cookie c=00000000a5abbd67 [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001308] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000d5d40782
	[  +0.001092] FS-Cache: N-key=[8] '983a040000000000'
	[Aug17 02:20] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000000d50d5d28 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001350] FS-Cache: O-cookie d=00000000570bf7f0 n=000000006f79c1b5
	[  +0.001072] FS-Cache: O-key=[8] '743a040000000000'
	[  +0.000817] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000570bf7f0 n=000000008bd3c191
	[  +0.001074] FS-Cache: N-key=[8] '743a040000000000'
	[  +0.001419] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=00000000a3b9a9c3 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000e8b4f969
	[  +0.001071] FS-Cache: O-key=[8] '983a040000000000'
	[  +0.000811] FS-Cache: N-cookie c=00000000529db89c [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001321] FS-Cache: N-cookie d=00000000570bf7f0 n=000000009f5facc3
	[  +0.001071] FS-Cache: N-key=[8] '983a040000000000'
	[  +0.001477] FS-Cache: Duplicate cookie detected
	[  +0.000815] FS-Cache: O-cookie c=0000000077b302b8 [p=00000000e2b8578e fl=226 nc=0 na=1]
	[  +0.001351] FS-Cache: O-cookie d=00000000570bf7f0 n=00000000f2917e65
	[  +0.001073] FS-Cache: O-key=[8] '763a040000000000'
	[  +0.000821] FS-Cache: N-cookie c=00000000031d0fcc [p=00000000e2b8578e fl=2 nc=0 na=1]
	[  +0.001314] FS-Cache: N-cookie d=00000000570bf7f0 n=00000000be710872
	[  +0.001064] FS-Cache: N-key=[8] '763a040000000000'
	
	* 
	* ==> etcd [6be36ee5272547b2d7bf82f3babc1f131f522574f0e800e07bc1fd59531d88ae] <==
	* 2021-08-17 03:17:54.015168 I | embed: initial advertise peer URLs = https://192.168.49.2:2380
	2021-08-17 03:17:54.015173 I | embed: initial cluster = 
	2021-08-17 03:17:54.149977 I | etcdserver: restarting member aec36adc501070cc in cluster fa54960ea34d58be at commit index 532
	raft2021/08/17 03:17:54 INFO: aec36adc501070cc switched to configuration voters=()
	raft2021/08/17 03:17:54 INFO: aec36adc501070cc became follower at term 2
	raft2021/08/17 03:17:54 INFO: newRaft aec36adc501070cc [peers: [], term: 2, commit: 532, applied: 0, lastindex: 532, lastterm: 2]
	2021-08-17 03:17:54.154239 W | auth: simple token is not cryptographically signed
	2021-08-17 03:17:54.210159 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	raft2021/08/17 03:17:54 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:17:54.213626 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 03:17:54.214199 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 03:17:54.214400 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 03:17:54.221676 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 03:17:54.221805 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-17 03:17:54.221881 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 03:17:55 INFO: aec36adc501070cc is starting a new election at term 2
	raft2021/08/17 03:17:55 INFO: aec36adc501070cc became candidate at term 3
	raft2021/08/17 03:17:55 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3
	raft2021/08/17 03:17:55 INFO: aec36adc501070cc became leader at term 3
	raft2021/08/17 03:17:55 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3
	2021-08-17 03:17:55.170718 I | etcdserver: published {Name:newest-cni-20210817031538-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 03:17:55.170884 I | embed: ready to serve client requests
	2021-08-17 03:17:55.172219 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 03:17:55.193186 I | embed: ready to serve client requests
	2021-08-17 03:17:55.212171 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> etcd [ef9c6f6e4c7fca951fc6ee6b33eeda5a8114bcd5b931915f7e4c7bafee6857da] <==
	* raft2021/08/17 03:16:39 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2021/08/17 03:16:39 INFO: aec36adc501070cc became follower at term 1
	raft2021/08/17 03:16:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:16:39.949180 W | auth: simple token is not cryptographically signed
	2021-08-17 03:16:39.976978 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-17 03:16:39.986520 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/17 03:16:39 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-17 03:16:39.987052 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-17 03:16:39.997824 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 03:16:39.997949 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-17 03:16:39.998016 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/17 03:16:40 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/17 03:16:40 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/17 03:16:40 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/17 03:16:40 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/17 03:16:40 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-17 03:16:40.556513 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 03:16:40.572655 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 03:16:40.572808 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 03:16:40.572896 I | etcdserver: published {Name:newest-cni-20210817031538-1554185 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-17 03:16:40.573079 I | embed: ready to serve client requests
	2021-08-17 03:16:40.574392 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-17 03:16:40.574552 I | embed: ready to serve client requests
	2021-08-17 03:16:40.575687 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 03:17:03.888313 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  03:18:29 up 11:00,  0 users,  load average: 3.32, 2.52, 1.98
	Linux newest-cni-20210817031538-1554185 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [21668b68e79b6a717d938adec524630fe75caae401a2caf1d1d8a4ea4a361158] <==
	* E0817 03:16:45.883273       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0817 03:16:46.007349       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0817 03:16:46.008194       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 03:16:46.019829       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 03:16:46.051349       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0817 03:16:46.051608       1 cache.go:39] Caches are synced for autoregister controller
	I0817 03:16:46.056217       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 03:16:46.070364       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0817 03:16:46.106482       1 controller.go:611] quota admission added evaluator for: namespaces
	I0817 03:16:46.806392       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 03:16:46.806498       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 03:16:46.818393       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0817 03:16:46.835304       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0817 03:16:46.835330       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 03:16:47.282206       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 03:16:47.327702       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 03:16:47.457219       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 03:16:47.458178       1 controller.go:611] quota admission added evaluator for: endpoints
	I0817 03:16:47.461771       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 03:16:48.013151       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 03:16:49.229763       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 03:16:49.266703       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 03:16:54.580523       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 03:17:01.423614       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 03:17:01.589189       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [482d4f309ae035f4dffa677808241b06e873f94937cb4d02cf6f4b874082f4c9] <==
	* I0817 03:18:00.887781       1 controller.go:83] Starting OpenAPI AggregationController
	I0817 03:18:00.887861       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0817 03:18:00.887934       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0817 03:18:01.143984       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0817 03:18:01.170181       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0817 03:18:01.170668       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0817 03:18:01.171581       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 03:18:01.172644       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 03:18:01.173090       1 cache.go:39] Caches are synced for autoregister controller
	I0817 03:18:01.186686       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0817 03:18:01.197589       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 03:18:01.245265       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 03:18:01.892509       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0817 03:18:01.983829       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 03:18:01.983863       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 03:18:02.798283       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	W0817 03:18:02.981763       1 handler_proxy.go:104] no RequestInfo found in the context
	E0817 03:18:02.981825       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 03:18:02.981837       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 03:18:02.986072       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 03:18:03.015915       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 03:18:03.109304       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 03:18:03.116144       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 03:18:04.246496       1 controller.go:611] quota admission added evaluator for: namespaces
	
	* 
	* ==> kube-controller-manager [0095f08395273c8984cca9e310f7ff709d93e7a80eea7ff384fe93ff0ca3f587] <==
	* I0817 03:17:00.905515       1 range_allocator.go:373] Set node newest-cni-20210817031538-1554185 PodCIDR to [192.168.0.0/24]
	I0817 03:17:00.905540       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-newest-cni-20210817031538-1554185" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 03:17:00.917852       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0817 03:17:00.926716       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0817 03:17:00.927144       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-newest-cni-20210817031538-1554185" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 03:17:00.957519       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0817 03:17:00.963921       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0817 03:17:00.971402       1 shared_informer.go:247] Caches are synced for attach detach 
	I0817 03:17:01.046318       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 03:17:01.067536       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 03:17:01.070853       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 03:17:01.430722       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-clj8s"
	I0817 03:17:01.434536       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w8m9q"
	I0817 03:17:01.546476       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 03:17:01.546499       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 03:17:01.555988       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 03:17:01.596460       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I0817 03:17:01.889433       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-x8zkx"
	I0817 03:17:01.910933       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-2mpcf"
	I0817 03:17:02.060976       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0817 03:17:02.102734       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-2mpcf"
	I0817 03:17:03.976197       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0817 03:17:04.000929       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0817 03:17:04.021218       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0817 03:17:04.047572       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-kfrc2"
	
	* 
	* ==> kube-controller-manager [ac3b3907e6fe2be3b974d04cc4e05e089eda3dff2ffa1462e30711de070a1935] <==
	* I0817 03:18:05.021252       1 shared_informer.go:240] Waiting for caches to sync for PV protection
	I0817 03:18:05.025260       1 controllermanager.go:577] Started "ttl"
	I0817 03:18:05.025611       1 ttl_controller.go:121] Starting TTL controller
	I0817 03:18:05.025698       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I0817 03:18:05.039322       1 controllermanager.go:577] Started "root-ca-cert-publisher"
	I0817 03:18:05.039544       1 publisher.go:107] Starting root CA certificate configmap publisher
	I0817 03:18:05.040040       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	W0817 03:18:05.048117       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0817 03:18:05.067904       1 controllermanager.go:577] Started "endpointslice"
	I0817 03:18:05.068214       1 endpointslice_controller.go:257] Starting endpoint slice controller
	I0817 03:18:05.071329       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0817 03:18:05.075257       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
	I0817 03:18:05.075403       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0817 03:18:05.075554       1 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0817 03:18:05.076393       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
	I0817 03:18:05.076552       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0817 03:18:05.076669       1 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0817 03:18:05.077448       1 controllermanager.go:577] Started "csrsigning"
	I0817 03:18:05.077606       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0817 03:18:05.078890       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0817 03:18:05.077625       1 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0817 03:18:05.077650       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0817 03:18:05.077662       1 dynamic_serving_content.go:129] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0817 03:18:05.079568       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0817 03:18:05.083112       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-proxy [ab86eee981b618272b91b55515001069ae0dd92ad5b20dbd6bda5078aa425c9d] <==
	* I0817 03:18:03.010014       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 03:18:03.010056       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 03:18:03.010068       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0817 03:18:03.040069       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 03:18:03.040100       1 server_others.go:212] Using iptables Proxier.
	I0817 03:18:03.040110       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 03:18:03.040126       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 03:18:03.040416       1 server.go:649] Version: v1.22.0-rc.0
	I0817 03:18:03.047840       1 config.go:315] Starting service config controller
	I0817 03:18:03.047859       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 03:18:03.048042       1 config.go:224] Starting endpoint slice config controller
	I0817 03:18:03.048054       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0817 03:18:03.095631       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210817031538-1554185.169bf9bd9f89f2f4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ee87ac2b37ee5, ext:108618382, loc:(*time.Location)(0x2698ec0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210817031538-1554185", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"",
Name:"newest-cni-20210817031538-1554185", UID:"newest-cni-20210817031538-1554185", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210817031538-1554185.169bf9bd9f89f2f4" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0817 03:18:03.148094       1 shared_informer.go:247] Caches are synced for service config 
	I0817 03:18:03.148211       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [c06aefb58ab88d79ab9f91de7d27c7aecedb76f04889875b510a03071313f04a] <==
	* I0817 03:17:02.301489       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0817 03:17:02.301536       1 server_others.go:140] Detected node IP 192.168.49.2
	W0817 03:17:02.301551       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0817 03:17:02.341341       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 03:17:02.341369       1 server_others.go:212] Using iptables Proxier.
	I0817 03:17:02.341380       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 03:17:02.341467       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 03:17:02.341810       1 server.go:649] Version: v1.22.0-rc.0
	I0817 03:17:02.343571       1 config.go:315] Starting service config controller
	I0817 03:17:02.343584       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 03:17:02.343612       1 config.go:224] Starting endpoint slice config controller
	I0817 03:17:02.343615       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0817 03:17:02.348376       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210817031538-1554185.169bf9af7d590edf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ee86b9464bea4, ext:92557120, loc:(*time.Location)(0x2698ec0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210817031538-1554185", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"",
Name:"newest-cni-20210817031538-1554185", UID:"newest-cni-20210817031538-1554185", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210817031538-1554185.169bf9af7d590edf" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0817 03:17:02.444623       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 03:17:02.444633       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [4d71197216232ef823404472e605ba2c876714d9e2bf8f922d9daf2592cad727] <==
	* W0817 03:17:54.224145       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0817 03:17:57.740895       1 serving.go:347] Generated self-signed cert in-memory
	W0817 03:18:01.056946       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0817 03:18:01.056978       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 03:18:01.056988       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 03:18:01.056994       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 03:18:01.137207       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0817 03:18:01.137306       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 03:18:01.137325       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 03:18:01.137343       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0817 03:18:01.149063       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 03:18:01.155715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 03:18:01.155824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 03:18:01.155900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 03:18:01.155971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 03:18:01.156032       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0817 03:18:01.271018       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [a2bfe6f66f2f948ce0f8370ecadfe2dff535f02ce23800223a2bf18c52238e58] <==
	* I0817 03:16:46.004139       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0817 03:16:46.015048       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.031692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.031853       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 03:16:46.032119       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.032183       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 03:16:46.032336       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 03:16:46.032402       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 03:16:46.032467       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 03:16:46.032517       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 03:16:46.032568       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 03:16:46.032895       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 03:16:46.036442       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 03:16:46.036611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.036775       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 03:16:46.036929       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 03:16:46.856811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.858998       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.920511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 03:16:46.931819       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 03:16:46.933785       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 03:16:47.077377       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 03:16:47.131082       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 03:16:47.297332       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 03:16:49.806361       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 03:17:25 UTC, end at Tue 2021-08-17 03:18:29 UTC. --
	Aug 17 03:18:00 newest-cni-20210817031538-1554185 kubelet[663]: E0817 03:18:00.748492     663 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210817031538-1554185\" not found"
	Aug 17 03:18:00 newest-cni-20210817031538-1554185 kubelet[663]: E0817 03:18:00.849025     663 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210817031538-1554185\" not found"
	Aug 17 03:18:00 newest-cni-20210817031538-1554185 kubelet[663]: E0817 03:18:00.952912     663 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210817031538-1554185\" not found"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.053081     663 kuberuntime_manager.go:1075] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.055685     663 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.200896     663 kubelet_node_status.go:109] "Node was previously registered" node="newest-cni-20210817031538-1554185"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.200988     663 kubelet_node_status.go:74] "Successfully registered node" node="newest-cni-20210817031538-1554185"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.948658     663 apiserver.go:52] "Watching apiserver"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.953731     663 topology_manager.go:200] "Topology Admit Handler"
	Aug 17 03:18:01 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:01.956868     663 topology_manager.go:200] "Topology Admit Handler"
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.058958     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/929df3e0-ea05-4a55-b16d-ac959dbf86a7-kube-proxy\") pod \"kube-proxy-clj8s\" (UID: \"929df3e0-ea05-4a55-b16d-ac959dbf86a7\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059014     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/929df3e0-ea05-4a55-b16d-ac959dbf86a7-lib-modules\") pod \"kube-proxy-clj8s\" (UID: \"929df3e0-ea05-4a55-b16d-ac959dbf86a7\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059045     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktdkc\" (UniqueName: \"kubernetes.io/projected/caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf-kube-api-access-ktdkc\") pod \"kindnet-w8m9q\" (UID: \"caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059072     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grkvl\" (UniqueName: \"kubernetes.io/projected/929df3e0-ea05-4a55-b16d-ac959dbf86a7-kube-api-access-grkvl\") pod \"kube-proxy-clj8s\" (UID: \"929df3e0-ea05-4a55-b16d-ac959dbf86a7\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059094     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf-cni-cfg\") pod \"kindnet-w8m9q\" (UID: \"caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059118     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf-xtables-lock\") pod \"kindnet-w8m9q\" (UID: \"caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059139     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf-lib-modules\") pod \"kindnet-w8m9q\" (UID: \"caf0d4b5-7c99-4fd7-8ec7-905fc0f4b7bf\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059162     663 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/929df3e0-ea05-4a55-b16d-ac959dbf86a7-xtables-lock\") pod \"kube-proxy-clj8s\" (UID: \"929df3e0-ea05-4a55-b16d-ac959dbf86a7\") "
	Aug 17 03:18:02 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:02.059173     663 reconciler.go:157] "Reconciler: start to sync state"
	Aug 17 03:18:03 newest-cni-20210817031538-1554185 kubelet[663]: E0817 03:18:03.098059     663 summary_sys_containers.go:47] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Aug 17 03:18:03 newest-cni-20210817031538-1554185 kubelet[663]: E0817 03:18:03.098131     663 helpers.go:673] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Aug 17 03:18:05 newest-cni-20210817031538-1554185 kubelet[663]: I0817 03:18:05.557833     663 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 17 03:18:05 newest-cni-20210817031538-1554185 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 17 03:18:05 newest-cni-20210817031538-1554185 systemd[1]: kubelet.service: Succeeded.
	Aug 17 03:18:05 newest-cni-20210817031538-1554185 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 03:18:29.491751 1761420 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (24.70s)
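Editor's note: the "failed logs error: exit status 110" above is the log collector's `kubectl describe nodes` call timing out on the TLS handshake, most likely because the control plane was still paused when logs were gathered. Below is a minimal, hypothetical Go sketch that re-runs the same probe outside the harness; the binary path, kubeconfig path and profile name are copied verbatim from the log output above, and this is not the test helper's own code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Re-run the command the log collector reported as failing.
	// Paths and the profile name are taken from the log above;
	// adjust them for your own environment.
	cmd := exec.Command("out/minikube-linux-arm64",
		"-p", "newest-cni-20210817031538-1554185", "ssh", "--",
		"sudo", "/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// A "TLS handshake timeout" from kubectl surfaces here as a
		// non-zero exit, matching the failure reported by helpers_test.go.
		fmt.Println("describe nodes failed:", err)
	}
}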

                                                
                                    
TestNetworkPlugins/group/cilium/Start (368.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p cilium-20210817024631-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p cilium-20210817024631-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: signal: killed (6m8.792567537s)

                                                
                                                
-- stdout --
	* [cilium-20210817024631-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node cilium-20210817024631-1554185 in cluster cilium-20210817024631-1554185
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 03:20:22.836413 1768710 out.go:298] Setting OutFile to fd 1 ...
	I0817 03:20:22.836515 1768710 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:20:22.836527 1768710 out.go:311] Setting ErrFile to fd 2...
	I0817 03:20:22.836530 1768710 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 03:20:22.836666 1768710 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 03:20:22.836932 1768710 out.go:305] Setting JSON to false
	I0817 03:20:22.837859 1768710 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39761,"bootTime":1629130662,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 03:20:22.837930 1768710 start.go:121] virtualization:  
	I0817 03:20:22.841211 1768710 out.go:177] * [cilium-20210817024631-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 03:20:22.842861 1768710 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 03:20:22.844753 1768710 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:20:22.843809 1768710 notify.go:169] Checking for updates...
	I0817 03:20:22.846361 1768710 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 03:20:22.847850 1768710 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 03:20:22.848342 1768710 config.go:177] Loaded profile config "old-k8s-version-20210817024805-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0817 03:20:22.848385 1768710 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 03:20:22.887427 1768710 docker.go:132] docker version: linux-20.10.8
	I0817 03:20:22.887501 1768710 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:20:22.982591 1768710 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:20:22.924269928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:20:22.982686 1768710 docker.go:244] overlay module found
	I0817 03:20:22.984758 1768710 out.go:177] * Using the docker driver based on user configuration
	I0817 03:20:22.984779 1768710 start.go:278] selected driver: docker
	I0817 03:20:22.984790 1768710 start.go:751] validating driver "docker" against <nil>
	I0817 03:20:22.984804 1768710 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 03:20:22.984843 1768710 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 03:20:22.984857 1768710 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0817 03:20:22.986500 1768710 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 03:20:22.986774 1768710 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 03:20:23.064495 1768710 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 03:20:23.013717142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 03:20:23.064625 1768710 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 03:20:23.064788 1768710 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 03:20:23.064817 1768710 cni.go:93] Creating CNI manager for "cilium"
	I0817 03:20:23.064828 1768710 start_flags.go:272] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0817 03:20:23.064838 1768710 start_flags.go:277] config:
	{Name:cilium-20210817024631-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210817024631-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:20:23.066737 1768710 out.go:177] * Starting control plane node cilium-20210817024631-1554185 in cluster cilium-20210817024631-1554185
	I0817 03:20:23.066765 1768710 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 03:20:23.068349 1768710 out.go:177] * Pulling base image ...
	I0817 03:20:23.068370 1768710 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:20:23.068396 1768710 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 03:20:23.068408 1768710 cache.go:56] Caching tarball of preloaded images
	I0817 03:20:23.068530 1768710 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0817 03:20:23.068550 1768710 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0817 03:20:23.068642 1768710 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/config.json ...
	I0817 03:20:23.068664 1768710 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/config.json: {Name:mk94c327332fb696485f075094bc8230be0afbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:20:23.068800 1768710 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 03:20:23.111533 1768710 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 03:20:23.111554 1768710 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 03:20:23.111568 1768710 cache.go:205] Successfully downloaded all kic artifacts
	I0817 03:20:23.111599 1768710 start.go:313] acquiring machines lock for cilium-20210817024631-1554185: {Name:mk97f932f54458a22437a38c3a00eeb95e17ef3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 03:20:23.111710 1768710 start.go:317] acquired machines lock for "cilium-20210817024631-1554185" in 90.107µs
	I0817 03:20:23.111736 1768710 start.go:89] Provisioning new machine with config: &{Name:cilium-20210817024631-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210817024631-1554185 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 03:20:23.111815 1768710 start.go:126] createHost starting for "" (driver="docker")
	I0817 03:20:23.115096 1768710 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0817 03:20:23.115323 1768710 start.go:160] libmachine.API.Create for "cilium-20210817024631-1554185" (driver="docker")
	I0817 03:20:23.115350 1768710 client.go:168] LocalClient.Create starting
	I0817 03:20:23.115423 1768710 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0817 03:20:23.115483 1768710 main.go:130] libmachine: Decoding PEM data...
	I0817 03:20:23.115504 1768710 main.go:130] libmachine: Parsing certificate...
	I0817 03:20:23.115604 1768710 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0817 03:20:23.115625 1768710 main.go:130] libmachine: Decoding PEM data...
	I0817 03:20:23.115640 1768710 main.go:130] libmachine: Parsing certificate...
	I0817 03:20:23.116000 1768710 cli_runner.go:115] Run: docker network inspect cilium-20210817024631-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 03:20:23.144976 1768710 cli_runner.go:162] docker network inspect cilium-20210817024631-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 03:20:23.145037 1768710 network_create.go:255] running [docker network inspect cilium-20210817024631-1554185] to gather additional debugging logs...
	I0817 03:20:23.145054 1768710 cli_runner.go:115] Run: docker network inspect cilium-20210817024631-1554185
	W0817 03:20:23.173651 1768710 cli_runner.go:162] docker network inspect cilium-20210817024631-1554185 returned with exit code 1
	I0817 03:20:23.173674 1768710 network_create.go:258] error running [docker network inspect cilium-20210817024631-1554185]: docker network inspect cilium-20210817024631-1554185: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20210817024631-1554185
	I0817 03:20:23.173686 1768710 network_create.go:260] output of [docker network inspect cilium-20210817024631-1554185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20210817024631-1554185
	
	** /stderr **
	I0817 03:20:23.173732 1768710 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:20:23.202705 1768710 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x4000b86888] misses:0}
	I0817 03:20:23.202750 1768710 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 03:20:23.202773 1768710 network_create.go:106] attempt to create docker network cilium-20210817024631-1554185 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 03:20:23.202830 1768710 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20210817024631-1554185
	I0817 03:20:23.285230 1768710 network_create.go:90] docker network cilium-20210817024631-1554185 192.168.49.0/24 created
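For reference, an illustrative Go sketch (not minikube's cli_runner/network_create code) that shells out to the same "docker network create" invocation shown above; the name, subnet, gateway and MTU come from that log line, everything else is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func createDockerNetwork(name, subnet, gateway string, mtu int) error {
        args := []string{
            "network", "create", "--driver=bridge",
            "--subnet=" + subnet, "--gateway=" + gateway,
            "-o", "--ip-masq", "-o", "--icc",
            "-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
            "--label=created_by.minikube.sigs.k8s.io=true",
            name,
        }
        out, err := exec.Command("docker", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker %v failed: %v\n%s", args, err, out)
        }
        return nil
    }

    func main() {
        err := createDockerNetwork("cilium-20210817024631-1554185", "192.168.49.0/24", "192.168.49.1", 1500)
        fmt.Println("create network error:", err)
    }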
	I0817 03:20:23.285256 1768710 kic.go:106] calculated static IP "192.168.49.2" for the "cilium-20210817024631-1554185" container
	I0817 03:20:23.285321 1768710 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 03:20:23.325132 1768710 cli_runner.go:115] Run: docker volume create cilium-20210817024631-1554185 --label name.minikube.sigs.k8s.io=cilium-20210817024631-1554185 --label created_by.minikube.sigs.k8s.io=true
	I0817 03:20:23.370980 1768710 oci.go:102] Successfully created a docker volume cilium-20210817024631-1554185
	I0817 03:20:23.371054 1768710 cli_runner.go:115] Run: docker run --rm --name cilium-20210817024631-1554185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210817024631-1554185 --entrypoint /usr/bin/test -v cilium-20210817024631-1554185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 03:20:23.945730 1768710 oci.go:106] Successfully prepared a docker volume cilium-20210817024631-1554185
	W0817 03:20:23.945770 1768710 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0817 03:20:23.945778 1768710 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0817 03:20:23.945840 1768710 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 03:20:23.946127 1768710 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:20:23.946353 1768710 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 03:20:23.946411 1768710 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cilium-20210817024631-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 03:20:24.068251 1768710 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20210817024631-1554185 --name cilium-20210817024631-1554185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210817024631-1554185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20210817024631-1554185 --network cilium-20210817024631-1554185 --ip 192.168.49.2 --volume cilium-20210817024631-1554185:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 03:20:24.682355 1768710 cli_runner.go:115] Run: docker container inspect cilium-20210817024631-1554185 --format={{.State.Running}}
	I0817 03:20:24.740128 1768710 cli_runner.go:115] Run: docker container inspect cilium-20210817024631-1554185 --format={{.State.Status}}
	I0817 03:20:24.780769 1768710 cli_runner.go:115] Run: docker exec cilium-20210817024631-1554185 stat /var/lib/dpkg/alternatives/iptables
	I0817 03:20:24.894320 1768710 oci.go:278] the created container "cilium-20210817024631-1554185" has a running status.
	I0817 03:20:24.894349 1768710 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa...
	I0817 03:20:25.490691 1768710 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 03:20:25.671477 1768710 cli_runner.go:115] Run: docker container inspect cilium-20210817024631-1554185 --format={{.State.Status}}
	I0817 03:20:25.726074 1768710 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 03:20:25.726089 1768710 kic_runner.go:115] Args: [docker exec --privileged cilium-20210817024631-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 03:20:33.511222 1768710 kic_runner.go:124] Done: [docker exec --privileged cilium-20210817024631-1554185 chown docker:docker /home/docker/.ssh/authorized_keys]: (7.785110797s)
	I0817 03:20:38.010602 1768710 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cilium-20210817024631-1554185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: (14.06413297s)
	I0817 03:20:38.010631 1768710 kic.go:188] duration metric: took 14.064276 seconds to extract preloaded images to volume
	I0817 03:20:38.010702 1768710 cli_runner.go:115] Run: docker container inspect cilium-20210817024631-1554185 --format={{.State.Status}}
	I0817 03:20:38.061887 1768710 machine.go:88] provisioning docker machine ...
	I0817 03:20:38.061919 1768710 ubuntu.go:169] provisioning hostname "cilium-20210817024631-1554185"
	I0817 03:20:38.061969 1768710 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210817024631-1554185
	I0817 03:20:38.124162 1768710 main.go:130] libmachine: Using SSH client type: native
	I0817 03:20:38.124431 1768710 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50513 <nil> <nil>}
	I0817 03:20:38.124453 1768710 main.go:130] libmachine: About to run SSH command:
	sudo hostname cilium-20210817024631-1554185 && echo "cilium-20210817024631-1554185" | sudo tee /etc/hostname
	I0817 03:20:38.263340 1768710 main.go:130] libmachine: SSH cmd err, output: <nil>: cilium-20210817024631-1554185
	
	I0817 03:20:38.263403 1768710 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210817024631-1554185
	I0817 03:20:38.346134 1768710 main.go:130] libmachine: Using SSH client type: native
	I0817 03:20:38.346288 1768710 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370c80] 0x370c50 <nil>  [] 0s} 127.0.0.1 50513 <nil> <nil>}
	I0817 03:20:38.346309 1768710 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20210817024631-1554185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20210817024631-1554185/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20210817024631-1554185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 03:20:38.506251 1768710 main.go:130] libmachine: SSH cmd err, output: <nil>: 
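The hostname and /etc/hosts script above is pushed over the forwarded SSH port (127.0.0.1:50513). A rough sketch of running a remote command that way with golang.org/x/crypto/ssh; the key path and the InsecureIgnoreHostKey callback are assumptions for a throwaway test container, not minikube's sshutil implementation:

    package main

    import (
        "fmt"
        "io/ioutil"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := ioutil.ReadFile("/path/to/machines/id_rsa") // placeholder key path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for local test containers
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:50513", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("hostname")
        fmt.Printf("output: %s err: %v\n", out, err)
    }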
	I0817 03:20:38.506272 1768710 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89
122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0817 03:20:38.506292 1768710 ubuntu.go:177] setting up certificates
	I0817 03:20:38.506300 1768710 provision.go:83] configureAuth start
	I0817 03:20:38.506346 1768710 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210817024631-1554185
	I0817 03:20:38.569323 1768710 provision.go:138] copyHostCerts
	I0817 03:20:38.569371 1768710 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0817 03:20:38.569378 1768710 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0817 03:20:38.569426 1768710 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0817 03:20:38.569490 1768710 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0817 03:20:38.569496 1768710 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0817 03:20:38.569517 1768710 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0817 03:20:38.569558 1768710 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0817 03:20:38.569563 1768710 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0817 03:20:38.569583 1768710 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1679 bytes)
	I0817 03:20:38.569615 1768710 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.cilium-20210817024631-1554185 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20210817024631-1554185]
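The server certificate generated here is an ordinary CA-signed certificate whose SANs are the IPs and names listed above. A standard-library sketch of the same operation; the helper name, key size and validity period are assumptions, not minikube's crypto.go:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a server certificate signed by the given CA cert/key pair.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (derCert []byte, key *rsa.PrivateKey, err error) {
        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.cilium-20210817024631-1554185"}},
            DNSNames:     []string{"localhost", "minikube", "cilium-20210817024631-1554185"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        derCert, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return derCert, key, nil
    }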
	I0817 03:20:39.006907 1768710 provision.go:172] copyRemoteCerts
	I0817 03:20:39.006989 1768710 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 03:20:39.007035 1768710 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210817024631-1554185
	I0817 03:20:39.046159 1768710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50513 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa Username:docker}
	I0817 03:20:39.128828 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 03:20:39.143799 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0817 03:20:39.159159 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 03:20:39.173736 1768710 provision.go:86] duration metric: configureAuth took 667.428199ms
	I0817 03:20:39.173752 1768710 ubuntu.go:193] setting minikube options for container-runtime
	I0817 03:20:39.173884 1768710 config.go:177] Loaded profile config "cilium-20210817024631-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:20:39.173891 1768710 machine.go:91] provisioned docker machine in 1.111983312s
	I0817 03:20:39.173896 1768710 client.go:171] LocalClient.Create took 16.058535554s
	I0817 03:20:39.173905 1768710 start.go:168] duration metric: libmachine.API.Create for "cilium-20210817024631-1554185" took 16.058581281s
	I0817 03:20:39.173911 1768710 start.go:267] post-start starting for "cilium-20210817024631-1554185" (driver="docker")
	I0817 03:20:39.173915 1768710 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 03:20:39.173967 1768710 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 03:20:39.174004 1768710 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210817024631-1554185
	I0817 03:20:39.204952 1768710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50513 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa Username:docker}
	I0817 03:20:39.288929 1768710 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 03:20:39.291193 1768710 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 03:20:39.291217 1768710 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 03:20:39.291232 1768710 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 03:20:39.291244 1768710 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 03:20:39.291253 1768710 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0817 03:20:39.291298 1768710 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0817 03:20:39.291396 1768710 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem -> 15541852.pem in /etc/ssl/certs
	I0817 03:20:39.291490 1768710 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 03:20:39.297060 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:20:39.311812 1768710 start.go:270] post-start completed in 137.890533ms
	I0817 03:20:39.312117 1768710 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210817024631-1554185
	I0817 03:20:39.342258 1768710 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/config.json ...
	I0817 03:20:39.342441 1768710 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 03:20:39.342485 1768710 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210817024631-1554185
	I0817 03:20:39.372118 1768710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50513 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa Username:docker}
	I0817 03:20:39.454214 1768710 start.go:129] duration metric: createHost completed in 16.342388917s
	I0817 03:20:39.454231 1768710 start.go:80] releasing machines lock for "cilium-20210817024631-1554185", held for 16.342507693s
	I0817 03:20:39.454294 1768710 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210817024631-1554185
	I0817 03:20:39.484155 1768710 ssh_runner.go:149] Run: systemctl --version
	I0817 03:20:39.484211 1768710 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210817024631-1554185
	I0817 03:20:39.484426 1768710 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 03:20:39.484481 1768710 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210817024631-1554185
	I0817 03:20:39.525173 1768710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50513 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa Username:docker}
	I0817 03:20:39.531048 1768710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50513 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa Username:docker}
	I0817 03:20:39.736607 1768710 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0817 03:20:39.747515 1768710 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 03:20:39.756916 1768710 docker.go:153] disabling docker service ...
	I0817 03:20:39.756960 1768710 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0817 03:20:39.775465 1768710 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0817 03:20:39.785961 1768710 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0817 03:20:39.880102 1768710 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0817 03:20:39.983657 1768710 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0817 03:20:39.993488 1768710 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 03:20:40.011399 1768710 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmV
yZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICB
jb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
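The containerd configuration is shipped as a base64 payload and decoded on the node into /etc/containerd/config.toml. To inspect what actually lands in that file, the same string can be decoded offline; a small sketch (the input file name is a placeholder):

    package main

    import (
        "encoding/base64"
        "fmt"
        "io/ioutil"
        "log"
        "strings"
    )

    func main() {
        // Paste the base64 string from the log line above into this file.
        raw, err := ioutil.ReadFile("containerd-config.b64")
        if err != nil {
            log.Fatal(err)
        }
        decoded, err := base64.StdEncoding.DecodeString(strings.TrimSpace(string(raw)))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(decoded)) // the generated config.toml
    }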
	I0817 03:20:40.023701 1768710 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 03:20:40.029351 1768710 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 03:20:40.034915 1768710 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 03:20:40.128648 1768710 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0817 03:20:40.217624 1768710 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0817 03:20:40.217723 1768710 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0817 03:20:40.220851 1768710 start.go:413] Will wait 60s for crictl version
	I0817 03:20:40.220892 1768710 ssh_runner.go:149] Run: sudo crictl version
	I0817 03:20:40.298875 1768710 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.9
	RuntimeApiVersion:  v1alpha2
	I0817 03:20:40.298925 1768710 ssh_runner.go:149] Run: containerd --version
	I0817 03:20:40.331068 1768710 ssh_runner.go:149] Run: containerd --version
	I0817 03:20:40.366588 1768710 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0817 03:20:40.366655 1768710 cli_runner.go:115] Run: docker network inspect cilium-20210817024631-1554185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 03:20:40.399220 1768710 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 03:20:40.402028 1768710 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:20:40.412894 1768710 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 03:20:40.412955 1768710 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:20:40.437835 1768710 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:20:40.437864 1768710 containerd.go:517] Images already preloaded, skipping extraction
	I0817 03:20:40.437900 1768710 ssh_runner.go:149] Run: sudo crictl images --output json
	I0817 03:20:40.468864 1768710 containerd.go:613] all images are preloaded for containerd runtime.
	I0817 03:20:40.468881 1768710 cache_images.go:74] Images are preloaded, skipping loading
	I0817 03:20:40.468927 1768710 ssh_runner.go:149] Run: sudo crictl info
	I0817 03:20:40.496159 1768710 cni.go:93] Creating CNI manager for "cilium"
	I0817 03:20:40.496184 1768710 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 03:20:40.496198 1768710 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20210817024631-1554185 NodeName:cilium-20210817024631-1554185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFil
e:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 03:20:40.496331 1768710 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "cilium-20210817024631-1554185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
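The kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A short sketch of walking those documents with gopkg.in/yaml.v3, for example to check each document's kind before the file is copied to /var/tmp/minikube/kubeadm.yaml; the local path is a placeholder:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // placeholder path to the generated config
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }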
	
	I0817 03:20:40.496418 1768710 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=cilium-20210817024631-1554185 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:cilium-20210817024631-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
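The kubelet drop-in above is rendered from the cluster config that follows it. A sketch of producing such a unit with text/template; the template text and field names here are assumptions for illustration, not minikube's actual template:

    package provision

    import (
        "bytes"
        "text/template"
    )

    const kubeletUnitTmpl = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime=remote --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

    [Install]
    `

    type kubeletOpts struct {
        KubernetesVersion, CRISocket, NodeName, NodeIP string
    }

    // renderKubeletUnit fills the template with the per-node values.
    func renderKubeletUnit(opts kubeletOpts) (string, error) {
        t, err := template.New("kubelet").Parse(kubeletUnitTmpl)
        if err != nil {
            return "", err
        }
        var buf bytes.Buffer
        if err := t.Execute(&buf, opts); err != nil {
            return "", err
        }
        return buf.String(), nil
    }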
	I0817 03:20:40.496469 1768710 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 03:20:40.503093 1768710 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 03:20:40.503146 1768710 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 03:20:40.510418 1768710 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (543 bytes)
	I0817 03:20:40.523527 1768710 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 03:20:40.536492 1768710 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0817 03:20:40.548854 1768710 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 03:20:40.553361 1768710 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 03:20:40.567326 1768710 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185 for IP: 192.168.49.2
	I0817 03:20:40.567381 1768710 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0817 03:20:40.567399 1768710 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0817 03:20:40.567454 1768710 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/client.key
	I0817 03:20:40.567464 1768710 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/client.crt with IP's: []
	I0817 03:20:40.827872 1768710 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/client.crt ...
	I0817 03:20:40.827897 1768710 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/client.crt: {Name:mk9e0888f475b3b9ed51ad1d6c6f8b38636ef497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:20:40.828066 1768710 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/client.key ...
	I0817 03:20:40.828082 1768710 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/client.key: {Name:mkac39ffbbfc1c9ff5bbb9708f5419a7846a0db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:20:40.828188 1768710 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.key.dd3b5fb2
	I0817 03:20:40.828200 1768710 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 03:20:41.202239 1768710 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.crt.dd3b5fb2 ...
	I0817 03:20:41.202264 1768710 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.crt.dd3b5fb2: {Name:mkae643e83ab1f733c18ddeab33f3239566b0006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:20:41.202448 1768710 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.key.dd3b5fb2 ...
	I0817 03:20:41.202465 1768710 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.key.dd3b5fb2: {Name:mka06c966dc83f7713d3f7157d0280a38bbf2985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:20:41.202571 1768710 certs.go:308] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.crt
	I0817 03:20:41.202631 1768710 certs.go:312] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.key
	I0817 03:20:41.202679 1768710 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/proxy-client.key
	I0817 03:20:41.202688 1768710 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/proxy-client.crt with IP's: []
	I0817 03:20:41.586382 1768710 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/proxy-client.crt ...
	I0817 03:20:41.586409 1768710 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/proxy-client.crt: {Name:mk9c849986588ee3ca09015dd1d51f8515d1fda5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:20:41.586581 1768710 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/proxy-client.key ...
	I0817 03:20:41.586599 1768710 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/proxy-client.key: {Name:mkae9f7e79e353140d22c583ade51a0020d42843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:20:41.587742 1768710 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem (1338 bytes)
	W0817 03:20:41.587813 1768710 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185_empty.pem, impossibly tiny 0 bytes
	I0817 03:20:41.587839 1768710 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 03:20:41.587896 1768710 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0817 03:20:41.587936 1768710 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0817 03:20:41.587989 1768710 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1679 bytes)
	I0817 03:20:41.588071 1768710 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem (1708 bytes)
	I0817 03:20:41.589157 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 03:20:41.610187 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 03:20:41.626895 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 03:20:41.643360 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210817024631-1554185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 03:20:41.660077 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 03:20:41.677518 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 03:20:41.695267 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 03:20:41.752485 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 03:20:41.769990 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/15541852.pem --> /usr/share/ca-certificates/15541852.pem (1708 bytes)
	I0817 03:20:41.793330 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 03:20:41.863864 1768710 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/1554185.pem --> /usr/share/ca-certificates/1554185.pem (1338 bytes)
	I0817 03:20:41.883391 1768710 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 03:20:41.895318 1768710 ssh_runner.go:149] Run: openssl version
	I0817 03:20:41.899628 1768710 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15541852.pem && ln -fs /usr/share/ca-certificates/15541852.pem /etc/ssl/certs/15541852.pem"
	I0817 03:20:41.906211 1768710 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/15541852.pem
	I0817 03:20:41.909149 1768710 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 17 02:10 /usr/share/ca-certificates/15541852.pem
	I0817 03:20:41.909202 1768710 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15541852.pem
	I0817 03:20:41.914966 1768710 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15541852.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 03:20:41.923789 1768710 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 03:20:41.932800 1768710 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:20:41.939676 1768710 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 17 01:51 /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:20:41.939729 1768710 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 03:20:41.945232 1768710 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 03:20:41.955207 1768710 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554185.pem && ln -fs /usr/share/ca-certificates/1554185.pem /etc/ssl/certs/1554185.pem"
	I0817 03:20:41.962467 1768710 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1554185.pem
	I0817 03:20:41.965165 1768710 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 17 02:10 /usr/share/ca-certificates/1554185.pem
	I0817 03:20:41.965286 1768710 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554185.pem
	I0817 03:20:41.971420 1768710 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554185.pem /etc/ssl/certs/51391683.0"
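The openssl hashing and symlinks above install the minikube CA into /etc/ssl/certs so certificates issued earlier in the run verify against it. A quick sketch of that verification in Go; the certificate paths are placeholders, and this check is not something the test itself performs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "io/ioutil"
        "log"
    )

    func main() {
        caPEM, err := ioutil.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            log.Fatal("could not add minikube CA to pool")
        }
        srvPEM, err := ioutil.ReadFile("/var/lib/minikube/certs/apiserver.crt") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(srvPEM)
        if block == nil {
            log.Fatal("no PEM block in apiserver.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        _, err = cert.Verify(x509.VerifyOptions{Roots: pool})
        fmt.Println("apiserver cert verify error:", err)
    }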
	I0817 03:20:41.981161 1768710 kubeadm.go:390] StartCluster: {Name:cilium-20210817024631-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210817024631-1554185 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 03:20:41.981239 1768710 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0817 03:20:41.981280 1768710 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 03:20:42.030969 1768710 cri.go:76] found id: ""
	I0817 03:20:42.031017 1768710 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 03:20:42.044978 1768710 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 03:20:42.052472 1768710 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 03:20:42.052514 1768710 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 03:20:42.074202 1768710 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 03:20:42.074243 1768710 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 03:20:42.739091 1768710 out.go:204]   - Generating certificates and keys ...
	I0817 03:20:50.387151 1768710 out.go:204]   - Booting up control plane ...
	I0817 03:21:09.951813 1768710 out.go:204]   - Configuring RBAC rules ...
	I0817 03:21:10.489867 1768710 cni.go:93] Creating CNI manager for "cilium"
	I0817 03:21:10.491476 1768710 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0817 03:21:10.491531 1768710 ssh_runner.go:149] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0817 03:21:10.538944 1768710 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 03:21:10.538960 1768710 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (18465 bytes)
	I0817 03:21:10.562297 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 03:21:11.525776 1768710 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 03:21:11.525887 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:11.525953 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=cilium-20210817024631-1554185 minikube.k8s.io/updated_at=2021_08_17T03_21_11_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:11.554760 1768710 ops.go:34] apiserver oom_adj: -16
	I0817 03:21:11.794785 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:12.405660 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:12.905510 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:13.405367 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:13.905371 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:14.405816 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:14.905354 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:15.405365 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:15.905993 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:16.405360 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:16.905744 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:17.405355 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:17.905978 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:18.405367 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:18.905379 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:19.405681 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:19.906348 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:20.405339 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:20.906271 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:21.406201 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:21.905988 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:22.405377 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:22.905745 1768710 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 03:21:23.035845 1768710 kubeadm.go:985] duration metric: took 11.509997563s to wait for elevateKubeSystemPrivileges.
	I0817 03:21:23.035871 1768710 kubeadm.go:392] StartCluster complete in 41.054715117s
	I0817 03:21:23.035887 1768710 settings.go:142] acquiring lock: {Name:mkb810a2a16a8fb7ee64a61f408e72dc46d2d721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:21:23.035972 1768710 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 03:21:23.036951 1768710 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk39cf21a1be8da23ccc5639436470178cae5630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 03:21:23.558468 1768710 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20210817024631-1554185" rescaled to 1
	I0817 03:21:23.558520 1768710 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 03:21:23.560060 1768710 out.go:177] * Verifying Kubernetes components...
	I0817 03:21:23.560109 1768710 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 03:21:23.558608 1768710 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 03:21:23.558797 1768710 config.go:177] Loaded profile config "cilium-20210817024631-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 03:21:23.558832 1768710 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 03:21:23.560225 1768710 addons.go:59] Setting storage-provisioner=true in profile "cilium-20210817024631-1554185"
	I0817 03:21:23.560239 1768710 addons.go:135] Setting addon storage-provisioner=true in "cilium-20210817024631-1554185"
	W0817 03:21:23.560245 1768710 addons.go:147] addon storage-provisioner should already be in state true
	I0817 03:21:23.560267 1768710 host.go:66] Checking if "cilium-20210817024631-1554185" exists ...
	I0817 03:21:23.560272 1768710 addons.go:59] Setting default-storageclass=true in profile "cilium-20210817024631-1554185"
	I0817 03:21:23.560286 1768710 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20210817024631-1554185"
	I0817 03:21:23.560586 1768710 cli_runner.go:115] Run: docker container inspect cilium-20210817024631-1554185 --format={{.State.Status}}
	I0817 03:21:23.560795 1768710 cli_runner.go:115] Run: docker container inspect cilium-20210817024631-1554185 --format={{.State.Status}}
	I0817 03:21:23.627627 1768710 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 03:21:23.627732 1768710 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:21:23.627740 1768710 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 03:21:23.627787 1768710 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210817024631-1554185
	I0817 03:21:23.679565 1768710 addons.go:135] Setting addon default-storageclass=true in "cilium-20210817024631-1554185"
	W0817 03:21:23.679590 1768710 addons.go:147] addon default-storageclass should already be in state true
	I0817 03:21:23.679613 1768710 host.go:66] Checking if "cilium-20210817024631-1554185" exists ...
	I0817 03:21:23.680057 1768710 cli_runner.go:115] Run: docker container inspect cilium-20210817024631-1554185 --format={{.State.Status}}
	I0817 03:21:23.698901 1768710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50513 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa Username:docker}
	I0817 03:21:23.746807 1768710 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 03:21:23.746831 1768710 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 03:21:23.746880 1768710 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210817024631-1554185
	I0817 03:21:23.797323 1768710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50513 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/cilium-20210817024631-1554185/id_rsa Username:docker}
	I0817 03:21:23.857577 1768710 node_ready.go:35] waiting up to 5m0s for node "cilium-20210817024631-1554185" to be "Ready" ...
	I0817 03:21:23.857915 1768710 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 03:21:23.860808 1768710 node_ready.go:49] node "cilium-20210817024631-1554185" has status "Ready":"True"
	I0817 03:21:23.860833 1768710 node_ready.go:38] duration metric: took 3.220333ms waiting for node "cilium-20210817024631-1554185" to be "Ready" ...
	I0817 03:21:23.860842 1768710 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 03:21:23.872764 1768710 pod_ready.go:78] waiting up to 5m0s for pod "cilium-kbksq" in "kube-system" namespace to be "Ready" ...
	I0817 03:21:23.951465 1768710 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 03:21:23.991445 1768710 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 03:21:24.742681 1768710 start.go:728] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0817 03:21:24.800400 1768710 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 03:21:24.800428 1768710 addons.go:344] enableAddons completed in 1.241617536s
	I0817 03:21:25.888628 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:27.889291 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:30.437829 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:32.896403 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:35.388390 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:37.503832 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:39.887977 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:42.387825 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:44.897006 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:47.387185 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:49.387340 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:51.387593 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:53.387902 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:55.890082 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:21:58.387917 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:00.388591 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:02.393752 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:04.886102 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:06.886519 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:08.887180 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:10.895118 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:13.387354 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:15.388712 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:17.887375 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:20.387247 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:22.387478 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:24.393341 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:26.886762 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:29.386643 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:31.387352 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:33.390929 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:35.887547 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:37.890606 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:40.397226 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:42.887601 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:44.887899 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:47.540232 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:49.886988 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:51.887108 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:54.394231 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:56.887069 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:22:59.386537 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:01.387792 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:03.887118 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:05.887487 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:07.890481 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:09.893425 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:12.398980 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:14.891296 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:16.892300 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:19.387115 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:21.887497 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:24.387322 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:26.388394 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:28.389658 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:30.888266 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:33.396129 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:35.886552 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:38.387307 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:40.887593 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:43.387513 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:45.387837 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:47.891433 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:50.400836 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:52.887495 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:54.891793 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:57.387096 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:23:59.387340 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:01.387724 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:03.387825 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:05.886846 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:07.887456 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:10.386930 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:12.387776 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:14.886532 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:16.886776 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:19.386969 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:21.387078 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:23.392791 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:25.894314 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:28.388238 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:30.504394 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:32.937057 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:35.391760 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:37.887501 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:39.888953 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:42.389469 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:44.886761 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:46.887370 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:49.388084 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:51.886483 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:53.886631 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:55.887522 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:24:58.391383 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:00.401582 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:02.887224 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:04.887283 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:07.386270 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:09.387354 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:11.886249 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:13.887185 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:15.887984 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:17.888129 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:20.387807 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:22.887584 1768710 pod_ready.go:102] pod "cilium-kbksq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:23.892881 1768710 pod_ready.go:81] duration metric: took 4m0.02009012s waiting for pod "cilium-kbksq" in "kube-system" namespace to be "Ready" ...
	E0817 03:25:23.892898 1768710 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0817 03:25:23.892906 1768710 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace to be "Ready" ...
	I0817 03:25:25.903641 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:28.402507 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:30.402782 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:32.403984 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:34.902427 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:36.913234 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:39.402217 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:41.403019 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:43.902840 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:46.402692 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:48.902473 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:51.403168 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:53.902561 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:55.902633 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:25:58.405646 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:00.902755 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:02.902838 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:05.402612 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:07.902390 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:09.902793 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:11.908694 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:14.404655 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:16.902029 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:18.902218 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:21.023675 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:23.401995 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:25.402106 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:27.402315 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"
	I0817 03:26:29.404425 1768710 pod_ready.go:102] pod "cilium-operator-99d899fb5-snfcq" in "kube-system" namespace has status "Ready":"False"

                                                
                                                
** /stderr **
net_test.go:100: failed start: signal: killed
--- FAIL: TestNetworkPlugins/group/cilium/Start (368.85s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (600.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-4724n" [76afc60c-a95d-4009-bad7-80d26bc7e13d] Running
E0817 03:26:31.048452 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dkindnet": context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:26:42.938387 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:27:19.191870 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
E0817 03:27:19.197095 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
E0817 03:27:19.207316 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
E0817 03:27:19.227669 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
E0817 03:27:19.267870 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
E0817 03:27:19.348098 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
E0817 03:27:19.508387 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
E0817 03:27:19.828857 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:27:20.469868 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:27:21.750038 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:27:24.310889 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:27:29.431393 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:27:39.671588 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:27:52.969236 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:28:00.152555 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:28:04.858535 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:28:14.884371 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:28:31.847229 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:28:41.112954 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:29:13.260855 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
E0817 03:29:13.266113 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
E0817 03:29:13.276316 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
E0817 03:29:13.296511 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
E0817 03:29:13.336723 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
E0817 03:29:13.416954 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
E0817 03:29:13.577283 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
E0817 03:29:13.897760 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:29:14.538763 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
E0817 03:29:14.631084 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:29:15.819333 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:29:18.379891 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:29:23.501026 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:29:33.741150 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:29:54.221853 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
E0817 03:29:54.898928 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:30:03.033993 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:30:09.127025 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:30:21.018849 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:30:35.183024 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:30:36.809406 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:30:48.699158 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:30:55.535416 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:31:57.103634 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:32:19.192622 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:32:46.874174 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:33:14.883730 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:33:31.847506 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:33:58.577146 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:34:13.260844 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
E0817 03:34:14.631327 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
[last warning repeated 25 more times]
E0817 03:34:40.943807 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210817024631-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
[last warning repeated 28 more times]
E0817 03:35:09.126719 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
[last warning repeated 10 more times]
E0817 03:35:21.018882 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
[last warning repeated 16 more times]
E0817 03:35:37.675727 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
[last warning repeated 17 more times]
E0817 03:35:55.534949 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
[last warning repeated 21 more times]
E0817 03:36:17.930684 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
helpers_test.go:328: TestNetworkPlugins/group/kindnet/ControllerPod: WARNING: pod list for "kube-system" "app=kindnet" returned: context deadline exceeded
[last warning repeated 13 more times]
net_test.go:106: ***** TestNetworkPlugins/group/kindnet/ControllerPod: pod "app=kindnet" failed to start within 10m0s: timed out waiting for the condition ****
net_test.go:106: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kindnet-20210817024631-1554185 -n kindnet-20210817024631-1554185
net_test.go:106: TestNetworkPlugins/group/kindnet/ControllerPod: showing logs for failed pods as of 2021-08-17 03:36:30.386109823 +0000 UTC m=+6421.180857607
net_test.go:106: (dbg) Run:  kubectl --context kindnet-20210817024631-1554185 describe po kindnet-4724n -n kube-system
net_test.go:106: (dbg) Non-zero exit: kubectl --context kindnet-20210817024631-1554185 describe po kindnet-4724n -n kube-system: context deadline exceeded (1.444µs)
net_test.go:106: kubectl --context kindnet-20210817024631-1554185 describe po kindnet-4724n -n kube-system: context deadline exceeded
net_test.go:106: (dbg) Run:  kubectl --context kindnet-20210817024631-1554185 logs kindnet-4724n -n kube-system
net_test.go:106: (dbg) Non-zero exit: kubectl --context kindnet-20210817024631-1554185 logs kindnet-4724n -n kube-system: context deadline exceeded (246ns)
net_test.go:106: kubectl --context kindnet-20210817024631-1554185 logs kindnet-4724n -n kube-system: context deadline exceeded
net_test.go:107: failed waiting for app=kindnet labeled pod: app=kindnet within 10m0s: timed out waiting for the condition
--- FAIL: TestNetworkPlugins/group/kindnet/ControllerPod (600.32s)
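The failure above is a timeout in the helper that polls kube-system for a pod carrying the app=kindnet label; roughly, each WARNING line collected above is one poll iteration whose pod list call failed. As a rough illustration only (this is not the minikube test helper itself), the sketch below shows the general client-go pattern of polling for a labeled pod to reach Running within a fixed budget. The namespace, label selector and 10-minute budget are taken from the log above; the kubeconfig path and the 5-second poll interval are assumptions for the example.

	// Illustrative sketch, not minikube test code: wait for a labeled pod to reach
	// Running using client-go, tolerating transient list errors, with an overall timeout.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the real test targets the per-profile context instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 5s, give up after 10m (the same budget the failing test used).
		err = wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "app=kindnet"})
			if err != nil {
				// Transient list errors are tolerated; keep polling.
				fmt.Println("pod list warning:", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
		if err != nil {
			fmt.Println("timed out waiting for app=kindnet:", err)
		}
	}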

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-20210817024630-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p enable-default-cni-20210817024630-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: context deadline exceeded (689ns)
net_test.go:100: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (0.00s)
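The 689ns duration together with "context deadline exceeded" suggests the subtest never actually launched minikube start: the deadline on the context governing the run had most likely already been consumed by earlier subtests, so the call returned immediately. The minimal Go sketch below (invented names, not test code from this suite) shows how an already-expired context makes a guarded operation fail in well under a microsecond without doing any work.

	// Illustrative sketch: an expired context short-circuits the work it guards.
	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// runStart stands in for launching a long external command under ctx.
	func runStart(ctx context.Context) error {
		if err := ctx.Err(); err != nil {
			return err // deadline already exceeded: return instantly, no work done
		}
		// ... would exec the command here ...
		return nil
	}

	func main() {
		// A 1ms budget that we deliberately let expire, mimicking a test group
		// whose shared timeout was used up before this step ran.
		ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
		defer cancel()
		time.Sleep(5 * time.Millisecond)

		start := time.Now()
		err := runStart(ctx)
		fmt.Printf("returned after %v: %v\n", time.Since(start), err)
		fmt.Println("deadline exceeded:", errors.Is(err, context.DeadlineExceeded))
	}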

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-20210817024630-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p bridge-20210817024630-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: context deadline exceeded (788ns)
net_test.go:100: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/bridge/Start (0.00s)

                                                
                                    

Test pass (185/241)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 16.26
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.07
10 TestDownloadOnly/v1.21.3/json-events 21.61
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.07
17 TestDownloadOnly/v1.22.0-rc.0/json-events 20.92
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.32
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
31 TestAddons/parallel/MetricsServer 5.72
35 TestAddons/parallel/GCPAuth 35.89
36 TestCertOptions 84.37
38 TestForceSystemdFlag 56.51
39 TestForceSystemdEnv 101.12
44 TestErrorSpam/setup 61.53
45 TestErrorSpam/start 0.87
46 TestErrorSpam/status 0.93
47 TestErrorSpam/pause 6.66
48 TestErrorSpam/unpause 1.47
49 TestErrorSpam/stop 23.08
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 121.43
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 16.06
56 TestFunctional/serial/KubeContext 0.06
57 TestFunctional/serial/KubectlGetPods 0.28
60 TestFunctional/serial/CacheCmd/cache/add_remote 6.15
61 TestFunctional/serial/CacheCmd/cache/add_local 1.11
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
63 TestFunctional/serial/CacheCmd/cache/list 0.07
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.48
66 TestFunctional/serial/CacheCmd/cache/delete 0.14
67 TestFunctional/serial/MinikubeKubectlCmd 0.4
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
69 TestFunctional/serial/ExtraConfig 46.62
70 TestFunctional/serial/ComponentHealth 0.1
71 TestFunctional/serial/LogsCmd 1.23
72 TestFunctional/serial/LogsFileCmd 1.16
74 TestFunctional/parallel/ConfigCmd 0.52
75 TestFunctional/parallel/DashboardCmd 2.66
76 TestFunctional/parallel/DryRun 0.5
77 TestFunctional/parallel/InternationalLanguage 0.21
78 TestFunctional/parallel/StatusCmd 0.94
81 TestFunctional/parallel/ServiceCmd 12.67
82 TestFunctional/parallel/AddonsCmd 0.16
85 TestFunctional/parallel/SSHCmd 0.68
86 TestFunctional/parallel/CpCmd 0.58
88 TestFunctional/parallel/FileSync 0.36
89 TestFunctional/parallel/CertSync 1.84
93 TestFunctional/parallel/NodeLabels 0.14
94 TestFunctional/parallel/LoadImage 1.77
95 TestFunctional/parallel/RemoveImage 2.05
96 TestFunctional/parallel/LoadImageFromFile 1.08
98 TestFunctional/parallel/ListImages 0.29
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
101 TestFunctional/parallel/Version/short 0.06
102 TestFunctional/parallel/Version/components 1.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
110 TestFunctional/parallel/ProfileCmd/profile_list 0.35
111 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
117 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
118 TestFunctional/parallel/MountCmd/specific-port 1.59
119 TestFunctional/delete_busybox_image 0.07
120 TestFunctional/delete_my-image_image 0.03
121 TestFunctional/delete_minikube_cached_images 0.03
125 TestJSONOutput/start/Audit 0
127 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
128 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
130 TestJSONOutput/pause/Audit 0
132 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
133 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
135 TestJSONOutput/unpause/Audit 0
137 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
138 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
140 TestJSONOutput/stop/Audit 0
142 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
143 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
144 TestErrorJSONOutput 0.28
146 TestKicCustomNetwork/create_custom_network 58.02
147 TestKicCustomNetwork/use_default_bridge_network 43.35
148 TestKicExistingNetwork 44.66
149 TestMainNoArgs 0.06
152 TestMultiNode/serial/FreshStart2Nodes 160.84
153 TestMultiNode/serial/DeployApp2Nodes 4.83
154 TestMultiNode/serial/PingHostFrom2Pods 1.06
155 TestMultiNode/serial/AddNode 39.41
156 TestMultiNode/serial/ProfileList 0.3
157 TestMultiNode/serial/CopyFile 2.33
158 TestMultiNode/serial/StopNode 21.17
159 TestMultiNode/serial/StartAfterStop 31.04
160 TestMultiNode/serial/RestartKeepsNodes 201.14
161 TestMultiNode/serial/DeleteNode 24.18
162 TestMultiNode/serial/StopMultiNode 40.28
163 TestMultiNode/serial/RestartMultiNode 100.02
164 TestMultiNode/serial/ValidateNameConflict 73.1
170 TestDebPackageInstall/install_arm64_debian:sid/minikube 0
171 TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver 11.71
173 TestDebPackageInstall/install_arm64_debian:latest/minikube 0
174 TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver 10.31
176 TestDebPackageInstall/install_arm64_debian:10/minikube 0
177 TestDebPackageInstall/install_arm64_debian:10/kvm2-driver 10
179 TestDebPackageInstall/install_arm64_debian:9/minikube 0
180 TestDebPackageInstall/install_arm64_debian:9/kvm2-driver 8.72
182 TestDebPackageInstall/install_arm64_ubuntu:latest/minikube 0
183 TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver 13.11
185 TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube 0
186 TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver 12.47
188 TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube 0
189 TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver 12.66
191 TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube 0
192 TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver 11.36
195 TestScheduledStopUnix 110.66
198 TestInsufficientStorage 22.79
201 TestKubernetesUpgrade 298.5
204 TestPause/serial/Start 127.09
205 TestPause/serial/SecondStartNoReconfiguration 16.9
208 TestPause/serial/Unpause 0.66
210 TestPause/serial/DeletePaused 2.93
211 TestPause/serial/VerifyDeletedResources 3.34
226 TestNetworkPlugins/group/false 0.48
231 TestStartStop/group/old-k8s-version/serial/FirstStart 134.68
233 TestStartStop/group/default-k8s-different-port/serial/FirstStart 122.5
234 TestStartStop/group/old-k8s-version/serial/DeployApp 8.64
235 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
236 TestStartStop/group/old-k8s-version/serial/Stop 20.23
237 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
239 TestStartStop/group/default-k8s-different-port/serial/DeployApp 7.78
240 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.98
241 TestStartStop/group/default-k8s-different-port/serial/Stop 20.29
242 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.19
243 TestStartStop/group/default-k8s-different-port/serial/SecondStart 340.87
244 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
245 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.12
246 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.28
249 TestStartStop/group/embed-certs/serial/FirstStart 124
250 TestStartStop/group/embed-certs/serial/DeployApp 8.71
251 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
252 TestStartStop/group/embed-certs/serial/Stop 20.24
253 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
254 TestStartStop/group/embed-certs/serial/SecondStart 344.97
256 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
257 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
258 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
261 TestStartStop/group/no-preload/serial/FirstStart 86.01
262 TestStartStop/group/no-preload/serial/DeployApp 9.61
263 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
264 TestStartStop/group/no-preload/serial/Stop 20.31
265 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
266 TestStartStop/group/no-preload/serial/SecondStart 332.94
268 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
269 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.29
270 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
273 TestStartStop/group/newest-cni/serial/FirstStart 84.55
274 TestStartStop/group/newest-cni/serial/DeployApp 0
275 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.64
276 TestStartStop/group/newest-cni/serial/Stop 20.39
277 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
278 TestStartStop/group/newest-cni/serial/SecondStart 40.1
279 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
280 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
281 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
283 TestNetworkPlugins/group/auto/Start 94.82
284 TestNetworkPlugins/group/auto/KubeletFlags 0.29
285 TestNetworkPlugins/group/auto/NetCatPod 10.66
286 TestNetworkPlugins/group/auto/DNS 0.25
287 TestNetworkPlugins/group/auto/Localhost 0.19
288 TestNetworkPlugins/group/auto/HairPin 0.18
290 TestNetworkPlugins/group/calico/Start 98.63
291 TestNetworkPlugins/group/calico/ControllerPod 5.03
292 TestNetworkPlugins/group/calico/KubeletFlags 0.3
293 TestNetworkPlugins/group/calico/NetCatPod 10.64
294 TestNetworkPlugins/group/calico/DNS 0.22
295 TestNetworkPlugins/group/calico/Localhost 0.18
296 TestNetworkPlugins/group/calico/HairPin 0.19
297 TestNetworkPlugins/group/custom-weave/Start 94.08
298 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.29
299 TestNetworkPlugins/group/custom-weave/NetCatPod 10.38
300 TestNetworkPlugins/group/kindnet/Start 123.91
x
+
TestDownloadOnly/v1.14.0/json-events (16.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210817014929-1554185 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210817014929-1554185 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (16.26314867s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (16.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210817014929-1554185
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210817014929-1554185: exit status 85 (70.680991ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 01:49:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 01:49:29.310146 1554191 out.go:298] Setting OutFile to fd 1 ...
	I0817 01:49:29.310226 1554191 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:49:29.310234 1554191 out.go:311] Setting ErrFile to fd 2...
	I0817 01:49:29.310238 1554191 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:49:29.310376 1554191 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	W0817 01:49:29.310499 1554191 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: no such file or directory
	I0817 01:49:29.310723 1554191 out.go:305] Setting JSON to true
	I0817 01:49:29.311701 1554191 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34308,"bootTime":1629130662,"procs":418,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 01:49:29.311775 1554191 start.go:121] virtualization:  
	I0817 01:49:29.314327 1554191 notify.go:169] Checking for updates...
	I0817 01:49:29.316558 1554191 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 01:49:29.353183 1554191 docker.go:132] docker version: linux-20.10.8
	I0817 01:49:29.353272 1554191 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:49:29.455552 1554191 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:49:29.397020764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:49:29.455675 1554191 docker.go:244] overlay module found
	I0817 01:49:29.457534 1554191 start.go:278] selected driver: docker
	I0817 01:49:29.457549 1554191 start.go:751] validating driver "docker" against <nil>
	I0817 01:49:29.457658 1554191 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:49:29.534957 1554191 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:49:29.483677077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:49:29.535076 1554191 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 01:49:29.535335 1554191 start_flags.go:344] Using suggested 2200MB memory alloc based on sys=7845MB, container=7845MB
	I0817 01:49:29.535440 1554191 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0817 01:49:29.535457 1554191 cni.go:93] Creating CNI manager for ""
	I0817 01:49:29.535464 1554191 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:49:29.535472 1554191 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 01:49:29.535480 1554191 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 01:49:29.535485 1554191 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 01:49:29.535492 1554191 start_flags.go:277] config:
	{Name:download-only-20210817014929-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210817014929-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:49:29.537607 1554191 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 01:49:29.539544 1554191 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0817 01:49:29.539693 1554191 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 01:49:29.581544 1554191 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 01:49:29.581564 1554191 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 01:49:29.592850 1554191 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4
	I0817 01:49:29.592870 1554191 cache.go:56] Caching tarball of preloaded images
	I0817 01:49:29.593073 1554191 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0817 01:49:29.594942 1554191 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4 ...
	I0817 01:49:29.702185 1554191 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:351eb6ada75b71a92acbf8ac88056f65 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4
	I0817 01:49:42.280641 1554191 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4 ...
	I0817 01:49:42.280727 1554191 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210817014929-1554185"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.07s)
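The Last Start log above shows the preload tarball being fetched with an md5 checksum appended to the download URL and then verified on disk. As an illustration of that general pattern only (not minikube's actual downloader), the sketch below streams a file over HTTP while computing its MD5 and compares the result against an expected sum; the URL and checksum are copied from the log above, while the output path is an assumption. Hashing through io.MultiWriter while the file is written avoids a second pass over the tarball just to verify it.

	// Illustrative sketch: download a file and verify its MD5 checksum while streaming.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4"
		wantMD5 := "351eb6ada75b71a92acbf8ac88056f65" // taken from the ?checksum=md5:... URL in the log

		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		out, err := os.Create("/tmp/preload.tar.lz4") // assumed destination for the example
		if err != nil {
			panic(err)
		}
		defer out.Close()

		// Hash the bytes as they are written to disk.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			panic(err)
		}

		got := hex.EncodeToString(h.Sum(nil))
		if got != wantMD5 {
			fmt.Printf("checksum mismatch: got %s, want %s\n", got, wantMD5)
			os.Exit(1)
		}
		fmt.Println("checksum OK")
	}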

                                                
                                    
x
+
TestDownloadOnly/v1.21.3/json-events (21.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210817014929-1554185 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210817014929-1554185 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (21.610137077s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (21.61s)

                                                
                                    
x
+
TestDownloadOnly/v1.21.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.21.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210817014929-1554185
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210817014929-1554185: exit status 85 (74.057522ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 01:49:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 01:49:45.653005 1554272 out.go:298] Setting OutFile to fd 1 ...
	I0817 01:49:45.653187 1554272 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:49:45.653197 1554272 out.go:311] Setting ErrFile to fd 2...
	I0817 01:49:45.653200 1554272 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:49:45.653373 1554272 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	W0817 01:49:45.653626 1554272 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: no such file or directory
	I0817 01:49:45.653750 1554272 out.go:305] Setting JSON to true
	I0817 01:49:45.654717 1554272 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34324,"bootTime":1629130662,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 01:49:45.654802 1554272 start.go:121] virtualization:  
	I0817 01:49:45.657150 1554272 notify.go:169] Checking for updates...
	I0817 01:49:45.659430 1554272 config.go:177] Loaded profile config "download-only-20210817014929-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	W0817 01:49:45.659492 1554272 start.go:659] api.Load failed for download-only-20210817014929-1554185: filestore "download-only-20210817014929-1554185": Docker machine "download-only-20210817014929-1554185" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 01:49:45.659540 1554272 driver.go:335] Setting default libvirt URI to qemu:///system
	W0817 01:49:45.659581 1554272 start.go:659] api.Load failed for download-only-20210817014929-1554185: filestore "download-only-20210817014929-1554185": Docker machine "download-only-20210817014929-1554185" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 01:49:45.694733 1554272 docker.go:132] docker version: linux-20.10.8
	I0817 01:49:45.694823 1554272 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:49:45.788111 1554272 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:49:45.721002409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:49:45.788216 1554272 docker.go:244] overlay module found
	I0817 01:49:45.789979 1554272 start.go:278] selected driver: docker
	I0817 01:49:45.789994 1554272 start.go:751] validating driver "docker" against &{Name:download-only-20210817014929-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210817014929-1554185 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:49:45.790230 1554272 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:49:45.869814 1554272 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:49:45.816207428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:49:45.870158 1554272 cni.go:93] Creating CNI manager for ""
	I0817 01:49:45.870175 1554272 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:49:45.870184 1554272 start_flags.go:277] config:
	{Name:download-only-20210817014929-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210817014929-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:49:45.872363 1554272 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 01:49:45.873902 1554272 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:49:45.873993 1554272 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 01:49:45.919780 1554272 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 01:49:45.919805 1554272 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 01:49:45.966553 1554272 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 01:49:45.966573 1554272 cache.go:56] Caching tarball of preloaded images
	I0817 01:49:45.966829 1554272 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0817 01:49:45.968989 1554272 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 ...
	I0817 01:49:46.063872 1554272 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:9d640646cc20893f4eeb92367d325250 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0817 01:50:03.764000 1554272 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 ...
	I0817 01:50:03.764080 1554272 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210817014929-1554185"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.07s)
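
The preload download in the log above fetches the tarball with an md5 checksum passed as a query parameter. A minimal way to repeat that verification by hand, assuming curl and md5sum are available on the host (neither is part of the test itself), is:

    curl -fLo preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 \
      https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
    echo "9d640646cc20893f4eeb92367d325250  preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4" | md5sum -c -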

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/json-events (20.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210817014929-1554185 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210817014929-1554185 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (20.918256173s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (20.92s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210817014929-1554185
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210817014929-1554185: exit status 85 (77.674168ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 01:50:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 01:50:07.338191 1554352 out.go:298] Setting OutFile to fd 1 ...
	I0817 01:50:07.338278 1554352 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:50:07.338283 1554352 out.go:311] Setting ErrFile to fd 2...
	I0817 01:50:07.338287 1554352 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 01:50:07.338406 1554352 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	W0817 01:50:07.338531 1554352 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: no such file or directory
	I0817 01:50:07.338643 1554352 out.go:305] Setting JSON to true
	I0817 01:50:07.339769 1554352 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34346,"bootTime":1629130662,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 01:50:07.339848 1554352 start.go:121] virtualization:  
	I0817 01:50:07.342149 1554352 notify.go:169] Checking for updates...
	I0817 01:50:07.344367 1554352 config.go:177] Loaded profile config "download-only-20210817014929-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	W0817 01:50:07.344443 1554352 start.go:659] api.Load failed for download-only-20210817014929-1554185: filestore "download-only-20210817014929-1554185": Docker machine "download-only-20210817014929-1554185" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 01:50:07.344501 1554352 driver.go:335] Setting default libvirt URI to qemu:///system
	W0817 01:50:07.344539 1554352 start.go:659] api.Load failed for download-only-20210817014929-1554185: filestore "download-only-20210817014929-1554185": Docker machine "download-only-20210817014929-1554185" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 01:50:07.379805 1554352 docker.go:132] docker version: linux-20.10.8
	I0817 01:50:07.379895 1554352 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:50:07.474953 1554352 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:50:07.411961356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:50:07.475056 1554352 docker.go:244] overlay module found
	I0817 01:50:07.476830 1554352 start.go:278] selected driver: docker
	I0817 01:50:07.476846 1554352 start.go:751] validating driver "docker" against &{Name:download-only-20210817014929-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210817014929-1554185 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:50:07.477034 1554352 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 01:50:07.553727 1554352 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-17 01:50:07.502856437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 01:50:07.554051 1554352 cni.go:93] Creating CNI manager for ""
	I0817 01:50:07.554069 1554352 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0817 01:50:07.554079 1554352 start_flags.go:277] config:
	{Name:download-only-20210817014929-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210817014929-1554185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 01:50:07.556067 1554352 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0817 01:50:07.557508 1554352 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 01:50:07.557596 1554352 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 01:50:07.599315 1554352 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 01:50:07.599338 1554352 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 01:50:07.617143 1554352 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4
	I0817 01:50:07.617160 1554352 cache.go:56] Caching tarball of preloaded images
	I0817 01:50:07.617372 1554352 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 01:50:07.619854 1554352 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	I0817 01:50:07.714473 1554352 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:54a0a9839942448749353ea5722c4adc -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4
	I0817 01:50:24.428680 1554352 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	I0817 01:50:24.428765 1554352 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	I0817 01:50:26.872215 1554352 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0817 01:50:26.872366 1554352 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/download-only-20210817014929-1554185/config.json ...
	I0817 01:50:26.872534 1554352 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0817 01:50:26.872755 1554352 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/linux/arm64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/linux/v1.22.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210817014929-1554185"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.08s)
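
For the kubectl binary above, the checksum=file: reference points at the published .sha256 file rather than an inline digest. A hand-run equivalent, again assuming curl and sha256sum on the host, would look roughly like:

    curl -fLo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/linux/arm64/kubectl
    echo "$(curl -fsL https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/linux/arm64/kubectl.sha256)  kubectl" | sha256sum -c -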

                                                
                                    
TestDownloadOnly/DeleteAll (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.32s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-20210817014929-1554185
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.72s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 1.784606ms
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-77c99ccb96-x8mh4" [634199c2-0567-4fb7-b5d6-2205aa4fcfd4] Running
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007379025s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210817015042-1554185 top pods -n kube-system
addons_test.go:374: kubectl --context addons-20210817015042-1554185 top pods -n kube-system: unexpected stderr: W0817 02:03:57.124023 1569823 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210817015042-1554185 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)
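
The "unexpected stderr" noted above is only kubectl's deprecation warning for the JSON metrics path; the test still passes. Passing the flag named in the warning silences it, e.g. (same context as the test, flag added by hand):

    kubectl --context addons-20210817015042-1554185 top pods -n kube-system --use-protocol-buffers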

                                                
                                    
TestAddons/parallel/GCPAuth (35.89s)

                                                
                                                
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210817015042-1554185 create -f testdata/busybox.yaml
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [96f6fe09-adf7-4035-917f-84f65860df10] Pending
helpers_test.go:343: "busybox" [96f6fe09-adf7-4035-917f-84f65860df10] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [96f6fe09-adf7-4035-917f-84f65860df10] Running
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 8.012995308s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210817015042-1554185 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:643: (dbg) Run:  kubectl --context addons-20210817015042-1554185 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210817015042-1554185 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:709: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210817015042-1554185 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:709: (dbg) Done: out/minikube-linux-arm64 -p addons-20210817015042-1554185 addons disable gcp-auth --alsologtostderr -v=1: (27.053613703s)
--- PASS: TestAddons/parallel/GCPAuth (35.89s)

                                                
                                    
TestCertOptions (84.37s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-20210817024728-1554185 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-20210817024728-1554185 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (1m20.978445026s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-20210817024728-1554185 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210817024728-1554185 config view
helpers_test.go:176: Cleaning up "cert-options-20210817024728-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-20210817024728-1554185
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-20210817024728-1554185: (2.874860029s)
--- PASS: TestCertOptions (84.37s)
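
The openssl step above dumps the full apiserver certificate to verify the extra --apiserver-ips/--apiserver-names values. When checking by hand it is usually enough to look at the SAN block and at the advertised server URL; a sketch, assuming the certificate has been copied off the node as apiserver.crt:

    openssl x509 -noout -text -in apiserver.crt | grep -A 2 'Subject Alternative Name'
    kubectl config view -o jsonpath='{.clusters[?(@.name=="cert-options-20210817024728-1554185")].cluster.server}'

The second command should report a server URL ending in the custom port 8555 requested by --apiserver-port.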

                                                
                                    
TestForceSystemdFlag (56.51s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-20210817024631-1554185 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-20210817024631-1554185 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (53.403357807s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-20210817024631-1554185 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-20210817024631-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-20210817024631-1554185
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-20210817024631-1554185: (2.835389654s)
--- PASS: TestForceSystemdFlag (56.51s)
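
The "cat /etc/containerd/config.toml" step is there to confirm that --force-systemd switched containerd's runc runtime to the systemd cgroup driver. In containerd's CRI config that normally appears as a stanza along these lines (a sketch of the expected setting, not the full generated file):

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true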

                                                
                                    
TestForceSystemdEnv (101.12s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-20210817024449-1554185 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0817 02:46:17.928791 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
docker_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-20210817024449-1554185 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m38.391611527s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-20210817024449-1554185 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-20210817024449-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-20210817024449-1554185
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-20210817024449-1554185: (2.463897775s)
--- PASS: TestForceSystemdEnv (101.12s)
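
Note that, unlike the flag variant above, this start command carries no --force-systemd; the request comes from the environment. Reproducing the run manually would look roughly like the following, where MINIKUBE_FORCE_SYSTEMD is minikube's environment toggle for the same setting and the remaining arguments are the ones shown in the log:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-20210817024449-1554185 --memory=2048 --alsologtostderr -v=5 --driver=docker --container-runtime=containerd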

                                                
                                    
TestErrorSpam/setup (61.53s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-20210817020827-1554185 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210817020827-1554185 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-arm64 start -p nospam-20210817020827-1554185 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210817020827-1554185 --driver=docker  --container-runtime=containerd: (1m1.528343777s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (61.53s)

                                                
                                    
TestErrorSpam/start (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

                                                
                                    
TestErrorSpam/status (0.93s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 status
--- PASS: TestErrorSpam/status (0.93s)

                                                
                                    
TestErrorSpam/pause (6.66s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 pause: exit status 80 (4.511250707s)

                                                
                                                
-- stdout --
	* Pausing node nospam-20210817020827-1554185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 8532a7a3c1803e688a3d6920608c5e7ca54c407ab977e3bd442732eccdf26455 a722bb963ae739feb59cf5137cf79ab0ecf3a4ce0da10e55fe5cfe38c1f8c74a: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-17T02:09:35Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭───────────────────────────────────────────────────────────────────────────────╮
	│                                                                               │
	│    * If the above advice does not help, please let us know:                   │
	│      https://github.com/kubernetes/minikube/issues/new/choose                 │
	│                                                                               │
	│    * Please attach the following file to the GitHub issue:                    │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_5.log    │
	│                                                                               │
	╰───────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:158: "out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 pause" failed: exit status 80
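
The failure above is runc rejecting a single "pause" invocation that was handed two container IDs at once; as the usage text says, runc pause takes exactly one container ID. Pausing the same containers by hand, one invocation per ID (IDs taken from the error message), would look like:

    for id in 8532a7a3c1803e688a3d6920608c5e7ca54c407ab977e3bd442732eccdf26455 \
              a722bb963ae739feb59cf5137cf79ab0ecf3a4ce0da10e55fe5cfe38c1f8c74a; do
      sudo runc --root /run/containerd/runc/k8s.io pause "$id"
    done
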
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 pause: (1.720714117s)
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 pause
--- PASS: TestErrorSpam/pause (6.66s)

                                                
                                    
TestErrorSpam/unpause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

                                                
                                    
TestErrorSpam/stop (23.08s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 stop: (22.814920652s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210817020827-1554185 --log_dir /tmp/nospam-20210817020827-1554185 stop
--- PASS: TestErrorSpam/stop (23.08s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/test/nested/copy/1554185/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (121.43s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:1982: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (2m1.433895819s)
--- PASS: TestFunctional/serial/StartWithProxy (121.43s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (16.06s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --alsologtostderr -v=8: (16.058806047s)
functional_test.go:631: soft start took 16.059250671s for "functional-20210817021007-1554185" cluster.
--- PASS: TestFunctional/serial/SoftStart (16.06s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210817021007-1554185 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (6.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 cache add k8s.gcr.io/pause:3.1: (2.238758337s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 cache add k8s.gcr.io/pause:3.3: (2.154921964s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 cache add k8s.gcr.io/pause:latest: (1.759809541s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210817021007-1554185 /tmp/functional-20210817021007-1554185601735238
functional_test.go:1024: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 cache add minikube-local-cache-test:functional-20210817021007-1554185
functional_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 cache delete minikube-local-cache-test:functional-20210817021007-1554185
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210817021007-1554185
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (296.942065ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 cache reload: (1.592151598s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 kubectl -- --context functional-20210817021007-1554185 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.40s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210817021007-1554185 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.62s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0817 02:13:14.884043 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:13:14.889701 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:13:14.899927 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:13:14.920139 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:13:14.960374 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:13:15.040669 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:13:15.201004 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:13:15.521529 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:13:16.162359 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:13:17.442621 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:13:20.003355 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
functional_test.go:715: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.619135693s)
functional_test.go:719: restart took 46.619229305s for "functional-20210817021007-1554185" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.62s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210817021007-1554185 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
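The ComponentHealth block above verifies that every control-plane pod (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) is Running and Ready by reading the pods labelled tier=control-plane. A rough equivalent of that check, as a sketch only: it assumes kubectl is on PATH and uses the kube context name taken from this log; the test itself parses the full JSON rather than a jsonpath template.

// control_plane_health.go - illustrative sketch; prints phase and Ready condition
// for the control-plane pods, similar in spirit to the check logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	context := "functional-20210817021007-1554185" // kube context from the log; adjust as needed
	tmpl := `{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}` +
		`{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pods", "-n", "kube-system", "-l", "tier=control-plane",
		"-o", "jsonpath="+tmpl).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err, string(out))
		return
	}
	fmt.Print(string(out)) // one line per pod: name, phase, Ready status
}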

                                                
                                    
TestFunctional/serial/LogsCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 logs
functional_test.go:1165: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 logs: (1.232537265s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.16s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 logs --file /tmp/functional-20210817021007-1554185016174317/logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 logs --file /tmp/functional-20210817021007-1554185016174317/logs.txt: (1.15967085s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.16s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 config get cpus
E0817 02:13:25.124587 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210817021007-1554185 config get cpus: exit status 14 (88.328553ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210817021007-1554185 config get cpus: exit status 14 (86.545998ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
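The ConfigCmd run above shows the config round-trip the test exercises: `config get cpus` exits with status 14 and "specified key could not be found in config" while the key is unset, then succeeds after `config set cpus 2`. A minimal sketch of the same probe follows; it assumes only that a minikube binary is on PATH and reuses the profile name from this log, and it is an illustration, not part of the test harness.

// configprobe.go - minimal sketch (not part of the minikube test suite).
// Runs "minikube config get cpus" for a profile and reports whether the key
// is set, treating a non-zero exit (status 14 in the log above) as "unset".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-20210817021007-1554185" // profile name taken from the log; adjust as needed
	out, err := exec.Command("minikube", "-p", profile, "config", "get", "cpus").CombinedOutput()
	if err != nil {
		// minikube exits non-zero (14 in this report) when the key is not in the config.
		fmt.Printf("cpus is not set: %v (%s)\n", err, strings.TrimSpace(string(out)))
		return
	}
	fmt.Printf("cpus = %s\n", strings.TrimSpace(string(out)))
}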

                                                
                                    
TestFunctional/parallel/DashboardCmd (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-20210817021007-1554185 --alsologtostderr -v=1]
2021/08/17 02:19:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:862: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-20210817021007-1554185 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 1588723: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (2.66s)

                                                
                                    
TestFunctional/parallel/DryRun (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (208.377847ms)

                                                
                                                
-- stdout --
	* [functional-20210817021007-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 02:19:18.942441 1588477 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:19:18.942611 1588477 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:19:18.942620 1588477 out.go:311] Setting ErrFile to fd 2...
	I0817 02:19:18.942624 1588477 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:19:18.942739 1588477 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:19:18.942988 1588477 out.go:305] Setting JSON to false
	I0817 02:19:18.943758 1588477 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36097,"bootTime":1629130662,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:19:18.943829 1588477 start.go:121] virtualization:  
	I0817 02:19:18.946001 1588477 out.go:177] * [functional-20210817021007-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:19:18.948008 1588477 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:19:18.949641 1588477 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:19:18.951229 1588477 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:19:18.953023 1588477 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:19:18.953411 1588477 config.go:177] Loaded profile config "functional-20210817021007-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:19:18.953840 1588477 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:19:18.998891 1588477 docker.go:132] docker version: linux-20.10.8
	I0817 02:19:18.998968 1588477 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:19:19.081355 1588477 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:19:19.028768831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:19:19.081456 1588477 docker.go:244] overlay module found
	I0817 02:19:19.083355 1588477 out.go:177] * Using the docker driver based on existing profile
	I0817 02:19:19.083374 1588477 start.go:278] selected driver: docker
	I0817 02:19:19.083386 1588477 start.go:751] validating driver "docker" against &{Name:functional-20210817021007-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210817021007-1554185 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false r
egistry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:19:19.083487 1588477 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 02:19:19.083520 1588477 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 02:19:19.083538 1588477 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0817 02:19:19.085237 1588477 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 02:19:19.086939 1588477 out.go:177] 
	W0817 02:19:19.087011 1588477 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0817 02:19:19.088855 1588477 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.50s)
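The DryRun output above shows how `minikube start --dry-run` validates the requested resources without creating anything: asking for 250MB of memory fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY, below the 1800MB usable minimum), while the second invocation without --memory passes. A hedged sketch of checking that behaviour from Go; it assumes a minikube binary on PATH, the profile name "dryrun-demo" is arbitrary, and the exit-code handling mirrors what this report shows rather than a documented API.

// dryrun_check.go - illustrative only; assumes "minikube" is on PATH.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "dryrun-demo",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the report above this combination exits with status 23
		// (requested 250MiB is less than the usable minimum of 1800MB).
		fmt.Println("dry run rejected the configuration, exit code:", exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println("dry run accepted the configuration")
}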

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-20210817021007-1554185 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (213.543801ms)

                                                
                                                
-- stdout --
	* [functional-20210817021007-1554185] minikube v1.22.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 02:19:19.440564 1588589 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:19:19.440665 1588589 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:19:19.440677 1588589 out.go:311] Setting ErrFile to fd 2...
	I0817 02:19:19.440680 1588589 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:19:19.440881 1588589 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:19:19.441133 1588589 out.go:305] Setting JSON to false
	I0817 02:19:19.442006 1588589 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36098,"bootTime":1629130662,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:19:19.442077 1588589 start.go:121] virtualization:  
	I0817 02:19:19.444014 1588589 out.go:177] * [functional-20210817021007-1554185] minikube v1.22.0 sur Ubuntu 20.04 (arm64)
	I0817 02:19:19.446183 1588589 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:19:19.447884 1588589 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:19:19.449306 1588589 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:19:19.450687 1588589 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:19:19.451119 1588589 config.go:177] Loaded profile config "functional-20210817021007-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:19:19.451563 1588589 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:19:19.490728 1588589 docker.go:132] docker version: linux-20.10.8
	I0817 02:19:19.490830 1588589 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:19:19.581760 1588589 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:19:19.525099966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:19:19.581862 1588589 docker.go:244] overlay module found
	I0817 02:19:19.584174 1588589 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0817 02:19:19.584196 1588589 start.go:278] selected driver: docker
	I0817 02:19:19.584211 1588589 start.go:751] validating driver "docker" against &{Name:functional-20210817021007-1554185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210817021007-1554185 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false r
egistry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 02:19:19.584339 1588589 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 02:19:19.584383 1588589 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 02:19:19.584407 1588589 out.go:242] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0817 02:19:19.586370 1588589 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 02:19:19.588391 1588589 out.go:177] 
	W0817 02:19:19.588483 1588589 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0817 02:19:19.590230 1588589 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 status
functional_test.go:815: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:826: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                    
TestFunctional/parallel/ServiceCmd (12.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1355: (dbg) Run:  kubectl --context functional-20210817021007-1554185 create deployment hello-node --image=k8s.gcr.io/echoserver-arm:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210817021007-1554185 expose deployment hello-node --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6d98884d59-96mjk" [f6dec24d-1e5e-4df4-909e-f1329ca8d299] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6d98884d59-96mjk" [f6dec24d-1e5e-4df4-909e-f1329ca8d299] Running
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.044577609s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 service list
functional_test.go:1385: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.49.2:32669
functional_test.go:1405: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.49.2:32669
functional_test.go:1431: Attempting to fetch http://192.168.49.2:32669 ...
functional_test.go:1450: http://192.168.49.2:32669: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-6d98884d59-96mjk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32669
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmd (12.67s)
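The ServiceCmd block above captures the full flow the test verifies: create a deployment, expose it as a NodePort service, resolve its URL with `minikube service hello-node --url` (http://192.168.49.2:32669 in this run), and fetch the echoserver response. A small sketch of the resolve-and-fetch half; it assumes minikube is on PATH, reuses the profile name from this log, and expects a "hello-node" service to already exist, all for illustration only.

// service_url.go - illustrative sketch; assumes minikube is on PATH and the
// "hello-node" NodePort service from the log already exists in the profile.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-20210817021007-1554185" // from the log; adjust as needed

	// Ask minikube for the reachable URL of the NodePort service.
	out, err := exec.Command("minikube", "-p", profile, "service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Println("service lookup failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	fmt.Println("endpoint:", url) // e.g. http://192.168.49.2:32669 in this report

	// Fetch it, as the test does before printing the echoserver body.
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s, %d bytes\n", resp.Status, len(body))
}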

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "echo hello"
functional_test.go:1515: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.58s)

                                                
                                    
TestFunctional/parallel/FileSync (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/1554185/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo cat /etc/test/nested/copy/1554185/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
TestFunctional/parallel/CertSync (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/1554185.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo cat /etc/ssl/certs/1554185.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/1554185.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo cat /usr/share/ca-certificates/1554185.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/15541852.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo cat /etc/ssl/certs/15541852.pem"
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/15541852.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo cat /usr/share/ca-certificates/15541852.pem"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.84s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210817021007-1554185 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                    
TestFunctional/parallel/LoadImage (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210817021007-1554185
functional_test.go:252: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 image load docker.io/library/busybox:load-functional-20210817021007-1554185
functional_test.go:252: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 image load docker.io/library/busybox:load-functional-20210817021007-1554185: (1.042127566s)
functional_test.go:373: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210817021007-1554185 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210817021007-1554185
--- PASS: TestFunctional/parallel/LoadImage (1.77s)
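The LoadImage steps above are the usual pattern for getting a locally available image into a containerd-backed minikube node: tag the image, run `minikube image load`, then confirm with `crictl inspecti` inside the node. A sketch of the same sequence driven from Go follows; it assumes docker and minikube on PATH, reuses the profile name from this log, and the "load-demo" tag is an arbitrary name for illustration, not the test's own helper code.

// image_load.go - illustrative sketch of the tag/load/verify sequence above.
// Assumes docker and minikube are on PATH and reuses the profile from this report.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	profile := "functional-20210817021007-1554185" // from the log; adjust as needed
	tag := "docker.io/library/busybox:load-demo"   // arbitrary tag for illustration

	steps := [][]string{
		{"docker", "pull", "busybox:1.33"},
		{"docker", "tag", "busybox:1.33", tag},
		{"minikube", "-p", profile, "image", "load", tag},
		// Verify the image is visible to containerd inside the node.
		{"minikube", "ssh", "-p", profile, "--", "sudo", "crictl", "inspecti", tag},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}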

                                                
                                    
TestFunctional/parallel/RemoveImage (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210817021007-1554185
functional_test.go:344: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 image load docker.io/library/busybox:remove-functional-20210817021007-1554185

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 image load docker.io/library/busybox:remove-functional-20210817021007-1554185: (1.031814849s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 image rm docker.io/library/busybox:remove-functional-20210817021007-1554185

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:387: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210817021007-1554185 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (2.05s)

                                                
                                    
TestFunctional/parallel/LoadImageFromFile (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210817021007-1554185
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210817021007-1554185
functional_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/busybox.tar
functional_test.go:387: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210817021007-1554185 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (1.08s)

                                                
                                    
TestFunctional/parallel/ListImages (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-linux-arm64 -p functional-20210817021007-1554185 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210817021007-1554185
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.29s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo systemctl is-active docker"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo systemctl is-active docker": exit status 1 (366.02176ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo systemctl is-active crio": exit status 1 (336.736744ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)
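The NonActiveRuntimeDisabled checks above rely on `systemctl is-active` semantics: on this containerd cluster both `docker` and `crio` print "inactive", the remote command exits non-zero (the ssh wrapper surfaces status 3, which systemd uses for inactive units), and the minikube ssh call therefore exits 1. A hedged sketch of the same probe; it assumes minikube on PATH and reuses the profile from this log.

// runtime_probe.go - illustrative; reports which container runtime units are
// active inside the minikube node, mirroring the systemctl probes logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-20210817021007-1554185" // from the log; adjust as needed
	for _, unit := range []string{"docker", "crio", "containerd"} {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// systemctl prints the unit state and exits non-zero for anything but
		// "active"; minikube ssh then also exits non-zero, so err is expected
		// for inactive units (as seen in the report above).
		fmt.Printf("%s: %s (err: %v)\n", unit, state, err)
	}
}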

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 version -o=json --components
functional_test.go:2016: (dbg) Done: out/minikube-linux-arm64 -p functional-20210817021007-1554185 version -o=json --components: (1.038768549s)
--- PASS: TestFunctional/parallel/Version/components (1.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-arm64 -p functional-20210817021007-1554185 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1245: Took "291.841208ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1259: Took "55.776197ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1295: Took "297.174423ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1308: Took "57.713442ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-arm64 -p functional-20210817021007-1554185 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-20210817021007-1554185 /tmp/mounttest969324145:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.637479ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-20210817021007-1554185 /tmp/mounttest969324145:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh "sudo umount -f /mount-9p": exit status 1 (284.144481ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-arm64 -p functional-20210817021007-1554185 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-20210817021007-1554185 /tmp/mounttest969324145:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)
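Note: the check above is essentially "findmnt -T /mount-9p | grep 9p" run over minikube ssh, retried until the 9p mount appears. A minimal local sketch of the same check; the mounted9p helper and its error handling are illustrative, not the test's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// mounted9p runs the same query as the test ("findmnt -T <dir>") and
// reports whether the output mentions a 9p filesystem.
func mounted9p(dir string) (bool, error) {
	out, err := exec.Command("findmnt", "-T", dir).Output()
	if err != nil {
		return false, err // findmnt failed or dir does not exist
	}
	return strings.Contains(string(out), "9p"), nil
}

func main() {
	ok, err := mounted9p("/mount-9p")
	fmt.Println("9p mounted:", ok, "err:", err)
}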

                                                
                                    
TestFunctional/delete_busybox_image (0.07s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210817021007-1554185
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210817021007-1554185
--- PASS: TestFunctional/delete_busybox_image (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210817021007-1554185
--- PASS: TestFunctional/delete_my-image_image (0.03s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210817021007-1554185
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-20210817022354-1554185 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-20210817022354-1554185 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.907502ms)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210817022354-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"b0e283cc-d81d-42e1-a509-f95956dc7b7f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"7e4b4208-8044-4e01-8abf-5e4791f6d396","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig"},"datacontenttype":"application/json","id":"a662ff49-fc44-4b6c-91e0-6289bba1c36e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube"},"datacontenttype":"application/json","id":"5f815941-2659-4f87-97c9-a866b766ca3e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"},"datacontenttype":"application/json","id":"269c2b0f-aa96-4ec0-a3d8-bf2fcbfeca71","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"fb2a12e4-7dbd-453a-b1e0-8078a310cc92","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210817022354-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-20210817022354-1554185
--- PASS: TestErrorJSONOutput (0.28s)
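Note: the stdout captured above is newline-delimited JSON in a CloudEvents-style envelope (specversion, id, source, type, plus a data payload). A minimal sketch for pulling the error events out of such a capture; the struct and the stdin piping are assumptions for illustration, not minikube or test code:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the --output=json lines above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		ExitCode string `json:"exitcode"`
		Message  string `json:"message"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. feed the captured stdout in here
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip anything that is not a JSON event line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", e.Data.Name, e.Data.ExitCode, e.Data.Message)
		}
	}
}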

                                                
                                    
TestKicCustomNetwork/create_custom_network (58.02s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-20210817022354-1554185 --network=
E0817 02:24:12.809225 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-20210817022354-1554185 --network=: (55.766362314s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210817022354-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-20210817022354-1554185
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-20210817022354-1554185: (2.214932841s)
--- PASS: TestKicCustomNetwork/create_custom_network (58.02s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (43.35s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-20210817022452-1554185 --network=bridge
E0817 02:24:53.774910 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-20210817022452-1554185 --network=bridge: (41.210338389s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210817022452-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-20210817022452-1554185
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-20210817022452-1554185: (2.10470927s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (43.35s)

                                                
                                    
TestKicExistingNetwork (44.66s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-20210817022536-1554185 --network=existing-network
E0817 02:26:15.695515 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-20210817022536-1554185 --network=existing-network: (42.248904241s)
helpers_test.go:176: Cleaning up "existing-network-20210817022536-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-20210817022536-1554185
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-20210817022536-1554185: (2.200787579s)
--- PASS: TestKicExistingNetwork (44.66s)
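Note: kic_custom_network_test.go:101 validates the network by listing names with "docker network ls --format {{.Name}}". A rough stand-alone sketch of that check, shelling out the same way; the networkExists helper is illustrative, not the test's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists lists docker network names the same way the test does
// and looks for an exact match.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		return false, err
	}
	for _, n := range strings.Fields(string(out)) {
		if n == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := networkExists("existing-network")
	fmt.Println("network present:", ok, "err:", err)
}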

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (160.84s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210817022620-1554185 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0817 02:28:14.883764 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:28:31.848524 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
E0817 02:28:59.536467 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210817022620-1554185 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m40.320870633s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (160.84s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.83s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- rollout status deployment/busybox: (2.323195018s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- exec busybox-84b6686758-6h75s -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- exec busybox-84b6686758-k6rjs -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- exec busybox-84b6686758-6h75s -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- exec busybox-84b6686758-k6rjs -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- exec busybox-84b6686758-6h75s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- exec busybox-84b6686758-k6rjs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.83s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- exec busybox-84b6686758-6h75s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- exec busybox-84b6686758-6h75s -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:529: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- exec busybox-84b6686758-k6rjs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210817022620-1554185 -- exec busybox-84b6686758-k6rjs -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)

                                                
                                    
TestMultiNode/serial/AddNode (39.41s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-20210817022620-1554185 -v 3 --alsologtostderr
E0817 02:29:37.928615 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-20210817022620-1554185 -v 3 --alsologtostderr: (38.703692143s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (39.41s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.3s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.30s)

                                                
                                    
TestMultiNode/serial/CopyFile (2.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 cp testdata/cp-test.txt multinode-20210817022620-1554185-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 ssh -n multinode-20210817022620-1554185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 cp testdata/cp-test.txt multinode-20210817022620-1554185-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 ssh -n multinode-20210817022620-1554185-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.33s)

                                                
                                    
TestMultiNode/serial/StopNode (21.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210817022620-1554185 node stop m03: (20.11106331s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status: exit status 7 (519.854058ms)

                                                
                                                
-- stdout --
	multinode-20210817022620-1554185
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210817022620-1554185-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210817022620-1554185-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status --alsologtostderr: exit status 7 (540.705828ms)

                                                
                                                
-- stdout --
	multinode-20210817022620-1554185
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210817022620-1554185-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210817022620-1554185-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 02:30:10.236215 1612570 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:30:10.236297 1612570 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:30:10.236307 1612570 out.go:311] Setting ErrFile to fd 2...
	I0817 02:30:10.236311 1612570 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:30:10.236443 1612570 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:30:10.236611 1612570 out.go:305] Setting JSON to false
	I0817 02:30:10.236642 1612570 mustload.go:65] Loading cluster: multinode-20210817022620-1554185
	I0817 02:30:10.236951 1612570 config.go:177] Loaded profile config "multinode-20210817022620-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:30:10.236969 1612570 status.go:253] checking status of multinode-20210817022620-1554185 ...
	I0817 02:30:10.237415 1612570 cli_runner.go:115] Run: docker container inspect multinode-20210817022620-1554185 --format={{.State.Status}}
	I0817 02:30:10.268432 1612570 status.go:328] multinode-20210817022620-1554185 host status = "Running" (err=<nil>)
	I0817 02:30:10.268453 1612570 host.go:66] Checking if "multinode-20210817022620-1554185" exists ...
	I0817 02:30:10.268739 1612570 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210817022620-1554185
	I0817 02:30:10.298219 1612570 host.go:66] Checking if "multinode-20210817022620-1554185" exists ...
	I0817 02:30:10.298495 1612570 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:30:10.298540 1612570 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210817022620-1554185
	I0817 02:30:10.328785 1612570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50349 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210817022620-1554185/id_rsa Username:docker}
	I0817 02:30:10.431323 1612570 ssh_runner.go:149] Run: systemctl --version
	I0817 02:30:10.434635 1612570 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:30:10.443878 1612570 kubeconfig.go:93] found "multinode-20210817022620-1554185" server: "https://192.168.49.2:8443"
	I0817 02:30:10.443902 1612570 api_server.go:164] Checking apiserver status ...
	I0817 02:30:10.443940 1612570 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 02:30:10.454760 1612570 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	I0817 02:30:10.460913 1612570 api_server.go:180] apiserver freezer: "6:freezer:/docker/d9552c538e73e7a3f29cbe2396637cf44c3984a7a3cd3cece6b06083f37c598d/kubepods/burstable/pod73bbfdb762e5f12283af307f71684421/fe61ae1c5906fb13ca31f5cca61dac72c3fecafbf7f2002c7593f51f2a6abad8"
	I0817 02:30:10.460980 1612570 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/d9552c538e73e7a3f29cbe2396637cf44c3984a7a3cd3cece6b06083f37c598d/kubepods/burstable/pod73bbfdb762e5f12283af307f71684421/fe61ae1c5906fb13ca31f5cca61dac72c3fecafbf7f2002c7593f51f2a6abad8/freezer.state
	I0817 02:30:10.466466 1612570 api_server.go:202] freezer state: "THAWED"
	I0817 02:30:10.466486 1612570 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 02:30:10.475043 1612570 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 02:30:10.475084 1612570 status.go:419] multinode-20210817022620-1554185 apiserver status = Running (err=<nil>)
	I0817 02:30:10.475103 1612570 status.go:255] multinode-20210817022620-1554185 status: &{Name:multinode-20210817022620-1554185 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0817 02:30:10.475121 1612570 status.go:253] checking status of multinode-20210817022620-1554185-m02 ...
	I0817 02:30:10.475414 1612570 cli_runner.go:115] Run: docker container inspect multinode-20210817022620-1554185-m02 --format={{.State.Status}}
	I0817 02:30:10.506524 1612570 status.go:328] multinode-20210817022620-1554185-m02 host status = "Running" (err=<nil>)
	I0817 02:30:10.506546 1612570 host.go:66] Checking if "multinode-20210817022620-1554185-m02" exists ...
	I0817 02:30:10.506908 1612570 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210817022620-1554185-m02
	I0817 02:30:10.538504 1612570 host.go:66] Checking if "multinode-20210817022620-1554185-m02" exists ...
	I0817 02:30:10.538793 1612570 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 02:30:10.538876 1612570 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210817022620-1554185-m02
	I0817 02:30:10.579748 1612570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210817022620-1554185-m02/id_rsa Username:docker}
	I0817 02:30:10.662624 1612570 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 02:30:10.671108 1612570 status.go:255] multinode-20210817022620-1554185-m02 status: &{Name:multinode-20210817022620-1554185-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0817 02:30:10.671135 1612570 status.go:253] checking status of multinode-20210817022620-1554185-m03 ...
	I0817 02:30:10.671444 1612570 cli_runner.go:115] Run: docker container inspect multinode-20210817022620-1554185-m03 --format={{.State.Status}}
	I0817 02:30:10.703045 1612570 status.go:328] multinode-20210817022620-1554185-m03 host status = "Stopped" (err=<nil>)
	I0817 02:30:10.703065 1612570 status.go:341] host is not running, skipping remaining checks
	I0817 02:30:10.703070 1612570 status.go:255] multinode-20210817022620-1554185-m03 status: &{Name:multinode-20210817022620-1554185-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (21.17s)
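Note: the --alsologtostderr trace above shows how status classifies m03 as stopped: it reads the container state with "docker container inspect --format={{.State.Status}}" and skips the kubelet/apiserver checks once the host is not running. A minimal sketch of that first query; containerState is an illustrative helper, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns docker's state string ("running", "exited", ...),
// the same value the status trace above inspects.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("multinode-20210817022620-1554185-m03")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("host state:", state) // "exited" is reported as Host:Stopped above
}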

                                                
                                    
TestMultiNode/serial/StartAfterStop (31.04s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 node start m03 --alsologtostderr
multinode_test.go:235: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210817022620-1554185 node start m03 --alsologtostderr: (30.230748948s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.04s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (201.14s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210817022620-1554185
multinode_test.go:271: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-20210817022620-1554185
multinode_test.go:271: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-20210817022620-1554185: (1m0.081916898s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210817022620-1554185 --wait=true -v=8 --alsologtostderr
E0817 02:33:14.884365 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:33:31.848252 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210817022620-1554185 --wait=true -v=8 --alsologtostderr: (2m20.925282181s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210817022620-1554185
--- PASS: TestMultiNode/serial/RestartKeepsNodes (201.14s)

                                                
                                    
TestMultiNode/serial/DeleteNode (24.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210817022620-1554185 node delete m03: (23.494300561s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status --alsologtostderr
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (24.18s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (40.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 stop
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210817022620-1554185 stop: (40.033247853s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status: exit status 7 (121.726757ms)

                                                
                                                
-- stdout --
	multinode-20210817022620-1554185
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210817022620-1554185-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status --alsologtostderr: exit status 7 (121.39904ms)

                                                
                                                
-- stdout --
	multinode-20210817022620-1554185
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210817022620-1554185-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 02:35:07.285927 1622050 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:35:07.286023 1622050 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:35:07.286034 1622050 out.go:311] Setting ErrFile to fd 2...
	I0817 02:35:07.286038 1622050 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:35:07.286183 1622050 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:35:07.286371 1622050 out.go:305] Setting JSON to false
	I0817 02:35:07.286402 1622050 mustload.go:65] Loading cluster: multinode-20210817022620-1554185
	I0817 02:35:07.286761 1622050 config.go:177] Loaded profile config "multinode-20210817022620-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0817 02:35:07.286778 1622050 status.go:253] checking status of multinode-20210817022620-1554185 ...
	I0817 02:35:07.287241 1622050 cli_runner.go:115] Run: docker container inspect multinode-20210817022620-1554185 --format={{.State.Status}}
	I0817 02:35:07.319997 1622050 status.go:328] multinode-20210817022620-1554185 host status = "Stopped" (err=<nil>)
	I0817 02:35:07.320021 1622050 status.go:341] host is not running, skipping remaining checks
	I0817 02:35:07.320027 1622050 status.go:255] multinode-20210817022620-1554185 status: &{Name:multinode-20210817022620-1554185 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0817 02:35:07.320056 1622050 status.go:253] checking status of multinode-20210817022620-1554185-m02 ...
	I0817 02:35:07.320361 1622050 cli_runner.go:115] Run: docker container inspect multinode-20210817022620-1554185-m02 --format={{.State.Status}}
	I0817 02:35:07.350276 1622050 status.go:328] multinode-20210817022620-1554185-m02 host status = "Stopped" (err=<nil>)
	I0817 02:35:07.350300 1622050 status.go:341] host is not running, skipping remaining checks
	I0817 02:35:07.350305 1622050 status.go:255] multinode-20210817022620-1554185-m02 status: &{Name:multinode-20210817022620-1554185-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.28s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (100.02s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210817022620-1554185 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:335: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210817022620-1554185 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m39.290864912s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210817022620-1554185 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (100.02s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (73.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210817022620-1554185
multinode_test.go:433: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210817022620-1554185-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-20210817022620-1554185-m02 --driver=docker  --container-runtime=containerd: exit status 14 (75.832048ms)

                                                
                                                
-- stdout --
	* [multinode-20210817022620-1554185-m02] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210817022620-1554185-m02' is duplicated with machine name 'multinode-20210817022620-1554185-m02' in profile 'multinode-20210817022620-1554185'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210817022620-1554185-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:441: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210817022620-1554185-m03 --driver=docker  --container-runtime=containerd: (1m9.830620446s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-20210817022620-1554185
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-20210817022620-1554185: exit status 80 (328.830568ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20210817022620-1554185
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210817022620-1554185-m03 already exists in multinode-20210817022620-1554185-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-20210817022620-1554185-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-20210817022620-1554185-m03: (2.80413882s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (73.10s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:sid/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver (11.71s)

=== RUN   TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
E0817 02:38:14.883817 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (11.70868381s)
--- PASS: TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver (11.71s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver (10.31s)

=== RUN   TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (10.305022807s)
--- PASS: TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver (10.31s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:10/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:10/kvm2-driver (10s)

=== RUN   TestDebPackageInstall/install_arm64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
E0817 02:38:31.847588 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (10.001357109s)
--- PASS: TestDebPackageInstall/install_arm64_debian:10/kvm2-driver (10.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:9/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:9/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:9/kvm2-driver (8.72s)

=== RUN   TestDebPackageInstall/install_arm64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (8.723797522s)
--- PASS: TestDebPackageInstall/install_arm64_debian:9/kvm2-driver (8.72s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver (13.11s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (13.107805333s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver (13.11s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver (12.47s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (12.470530943s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver (12.47s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver (12.66s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (12.657957422s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver (12.66s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver (11.36s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (11.356660627s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver (11.36s)
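
For reference, the package install these TestDebPackageInstall cases exercise can be replayed by hand. The sketch below reuses the image and .deb path from the log above; the final dpkg -s query is an added assumption to confirm the package registered, not part of the test.

	docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:18.04 sh -c \
	  "apt-get update && apt-get install -y libvirt0 && \
	   dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb && \
	   dpkg -s docker-machine-driver-kvm2"   # verify the driver package is now installed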

                                                
                                    
TestScheduledStopUnix (110.66s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-20210817023935-1554185 --memory=2048 --driver=docker  --container-runtime=containerd
E0817 02:39:54.896903 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-20210817023935-1554185 --memory=2048 --driver=docker  --container-runtime=containerd: (1m6.769836923s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210817023935-1554185 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-20210817023935-1554185 -n scheduled-stop-20210817023935-1554185
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210817023935-1554185 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210817023935-1554185 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210817023935-1554185 -n scheduled-stop-20210817023935-1554185
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-20210817023935-1554185
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210817023935-1554185 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-20210817023935-1554185
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-20210817023935-1554185: exit status 7 (127.967399ms)

                                                
                                                
-- stdout --
	scheduled-stop-20210817023935-1554185
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210817023935-1554185 -n scheduled-stop-20210817023935-1554185
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210817023935-1554185 -n scheduled-stop-20210817023935-1554185: exit status 7 (98.438348ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20210817023935-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-20210817023935-1554185
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-20210817023935-1554185: (5.251289515s)
--- PASS: TestScheduledStopUnix (110.66s)
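
Outside the harness, the scheduled-stop flow verified above reduces to a short command sequence. A minimal sketch using the same profile name and flags as the log (the sleep is added for illustration; once the schedule fires, status is expected to exit 7 and report Stopped):

	out/minikube-linux-arm64 stop -p scheduled-stop-20210817023935-1554185 --schedule 5m       # arm a stop five minutes out
	out/minikube-linux-arm64 stop -p scheduled-stop-20210817023935-1554185 --cancel-scheduled  # cancel the pending stop
	out/minikube-linux-arm64 stop -p scheduled-stop-20210817023935-1554185 --schedule 5s       # re-arm with a short delay
	sleep 10                                                                                   # give the schedule time to fire
	out/minikube-linux-arm64 status -p scheduled-stop-20210817023935-1554185                   # exit status 7, host: Stopped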

                                                
                                    
TestInsufficientStorage (22.79s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-20210817024125-1554185 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-20210817024125-1554185 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (16.082716603s)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210817024125-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"6d6d65f9-962f-476c-9820-5dd406d98876","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"d2df6fed-f7cd-4129-82d7-06cf08f0844d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig"},"datacontenttype":"application/json","id":"95b46d7a-967b-4bb8-88c8-7d04c30b4d36","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube"},"datacontenttype":"application/json","id":"bcb44c6d-8363-44b2-af39-77070b1bbedb","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"},"datacontenttype":"application/json","id":"d9c3de40-32b2-4d39-a293-eb24a0904f1d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"5d36d5b4-f8e1-4d79-a005-024012464bc8","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"46639967-3f93-4e56-bb77-0d62f75571ca","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"54a667f7-9458-4e9a-95e4-0ac8a5a05321","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"39baa053-6158-414e-88fe-975cc8b211ed","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210817024125-1554185 in cluster insufficient-storage-20210817024125-1554185","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"5c97d936-8882-4bcb-b44e-2508d31d066e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"167c64f0-f9ed-446b-a8c2-8220017dd37e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"b49c644e-1500-41cd-bbb0-ba206573034a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"8f5db00f-538e-40e3-80df-b1a6f578315e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-20210817024125-1554185 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-20210817024125-1554185 --output=json --layout=cluster: exit status 7 (268.786337ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210817024125-1554185","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210817024125-1554185","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 02:41:42.339456 1656473 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210817024125-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-20210817024125-1554185 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-20210817024125-1554185 --output=json --layout=cluster: exit status 7 (270.979612ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210817024125-1554185","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210817024125-1554185","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 02:41:42.611015 1656505 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210817024125-1554185" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	E0817 02:41:42.619932 1656505 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/insufficient-storage-20210817024125-1554185/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20210817024125-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-20210817024125-1554185
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-20210817024125-1554185: (6.170734459s)
--- PASS: TestInsufficientStorage (22.79s)
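
Because the cluster-layout status above is plain JSON, the insufficient-storage condition can be checked programmatically. A minimal sketch, assuming jq is available (jq is not used by the test itself); StatusCode 507 with StatusName "InsufficientStorage" is the disk-full signal:

	out/minikube-linux-arm64 status -p insufficient-storage-20210817024125-1554185 --output=json --layout=cluster \
	  | jq -r '.StatusName, .Nodes[0].Components.kubelet.StatusName'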

                                                
                                    
TestKubernetesUpgrade (298.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210817024307-1554185 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0817 02:43:14.883469 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:43:31.847225 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210817024307-1554185 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m13.313619238s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-20210817024307-1554185

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-20210817024307-1554185: (20.269871509s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-20210817024307-1554185 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-20210817024307-1554185 status --format={{.Host}}: exit status 7 (114.70589ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210817024307-1554185 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210817024307-1554185 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (2m39.58948998s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210817024307-1554185 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210817024307-1554185 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210817024307-1554185 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd: exit status 106 (72.342888ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20210817024307-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210817024307-1554185
	    minikube start -p kubernetes-upgrade-20210817024307-1554185 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210817024307-15541852 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210817024307-1554185 --kubernetes-version=v1.22.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210817024307-1554185 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210817024307-1554185 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.443262969s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210817024307-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-20210817024307-1554185
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-20210817024307-1554185: (3.617269959s)
--- PASS: TestKubernetesUpgrade (298.50s)
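
Stripped of the test harness, the upgrade path validated here is: start on the old release, stop, restart on the newer release. Going back down is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), as shown in the stderr above. A minimal sketch with the versions from this run:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-20210817024307-1554185 --kubernetes-version=v1.14.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-20210817024307-1554185
	out/minikube-linux-arm64 start -p kubernetes-upgrade-20210817024307-1554185 --kubernetes-version=v1.22.0-rc.0 --driver=docker --container-runtime=containerd
	# Re-running start with --kubernetes-version=v1.14.0 against this profile is expected to fail fast (exit status 106).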

                                                
                                    
TestPause/serial/Start (127.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-arm64 start -p pause-20210817024148-1554185 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-arm64 start -p pause-20210817024148-1554185 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (2m7.090039489s)
--- PASS: TestPause/serial/Start (127.09s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (16.9s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-arm64 start -p pause-20210817024148-1554185 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:89: (dbg) Done: out/minikube-linux-arm64 start -p pause-20210817024148-1554185 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.884603219s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.90s)

                                                
                                    
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-20210817024148-1554185 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
TestPause/serial/DeletePaused (2.93s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-20210817024148-1554185 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-arm64 delete -p pause-20210817024148-1554185 --alsologtostderr -v=5: (2.92839722s)
--- PASS: TestPause/serial/DeletePaused (2.93s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (3.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:139: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (3.280059758s)
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210817024148-1554185
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210817024148-1554185: exit status 1 (28.009638ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210817024148-1554185

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (3.34s)
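
The cleanup verification above leans on docker itself: after minikube delete, the profile's container and volume should be gone, so the non-zero exit from docker volume inspect is the success signal. A minimal sketch for the same profile (the --filter flag is added for illustration; the test runs plain docker ps -a):

	docker ps -a --filter name=pause-20210817024148-1554185                      # expect no matching container
	docker volume inspect pause-20210817024148-1554185 || echo "volume removed"  # non-zero exit confirms the volume is gone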

                                                
                                    
TestNetworkPlugins/group/false (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-arm64 start -p false-20210817024631-1554185 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-20210817024631-1554185 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (215.576504ms)

                                                
                                                
-- stdout --
	* [false-20210817024631-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 02:46:31.142034 1670752 out.go:298] Setting OutFile to fd 1 ...
	I0817 02:46:31.142124 1670752 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:46:31.142134 1670752 out.go:311] Setting ErrFile to fd 2...
	I0817 02:46:31.142137 1670752 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 02:46:31.142262 1670752 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0817 02:46:31.142514 1670752 out.go:305] Setting JSON to false
	I0817 02:46:31.143432 1670752 start.go:111] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37730,"bootTime":1629130662,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0817 02:46:31.143506 1670752 start.go:121] virtualization:  
	I0817 02:46:31.145947 1670752 out.go:177] * [false-20210817024631-1554185] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0817 02:46:31.147813 1670752 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 02:46:31.146834 1670752 notify.go:169] Checking for updates...
	I0817 02:46:31.149480 1670752 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0817 02:46:31.151027 1670752 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0817 02:46:31.152590 1670752 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0817 02:46:31.153048 1670752 config.go:177] Loaded profile config "kubernetes-upgrade-20210817024307-1554185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0817 02:46:31.153099 1670752 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 02:46:31.190591 1670752 docker.go:132] docker version: linux-20.10.8
	I0817 02:46:31.190673 1670752 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 02:46:31.292514 1670752 info.go:263] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:37 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-17 02:46:31.23587915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226258944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0817 02:46:31.292621 1670752 docker.go:244] overlay module found
	I0817 02:46:31.294592 1670752 out.go:177] * Using the docker driver based on user configuration
	I0817 02:46:31.294611 1670752 start.go:278] selected driver: docker
	I0817 02:46:31.294616 1670752 start.go:751] validating driver "docker" against <nil>
	I0817 02:46:31.294628 1670752 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0817 02:46:31.294680 1670752 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0817 02:46:31.294695 1670752 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0817 02:46:31.296330 1670752 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0817 02:46:31.298291 1670752 out.go:177] 
	W0817 02:46:31.298360 1670752 out.go:242] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0817 02:46:31.299818 1670752 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "false-20210817024631-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-20210817024631-1554185
--- PASS: TestNetworkPlugins/group/false (0.48s)
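
The exit status 14 (MK_USAGE) above is the expected guard: with the containerd runtime, minikube refuses --cni=false. A minimal sketch of the distinction; the second command is an assumed working configuration (other containerd profiles in this run start without --cni), not something this test executes:

	# Rejected: containerd requires a CNI plugin
	out/minikube-linux-arm64 start -p false-20210817024631-1554185 --cni=false --driver=docker --container-runtime=containerd
	# Accepted: leave --cni unset so minikube selects a CNI suitable for containerd
	out/minikube-linux-arm64 start -p false-20210817024631-1554185 --driver=docker --container-runtime=containerd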

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (134.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-20210817024805-1554185 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0
E0817 02:48:14.884342 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:48:31.847918 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-20210817024805-1554185 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (2m14.683210997s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (134.68s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (122.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-different-port-20210817024852-1554185 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-different-port-20210817024852-1554185 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (2m2.496154041s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (122.50s)
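
This profile pins the API server to a non-default port via --apiserver-port=8444. A sketch of how the endpoint could be confirmed outside the test (not something this test runs); the kubeconfig server URL for the profile is expected to end in :8444:

	kubectl --context default-k8s-different-port-20210817024852-1554185 cluster-info
	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-different-port-20210817024852-1554185")].cluster.server}'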

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210817024805-1554185 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [dcd89c48-ff05-11eb-b750-02420e977974] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [dcd89c48-ff05-11eb-b750-02420e977974] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.030061166s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210817024805-1554185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-20210817024805-1554185 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210817024805-1554185 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (20.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-20210817024805-1554185 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-20210817024805-1554185 --alsologtostderr -v=3: (20.232859903s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210817024805-1554185 -n old-k8s-version-20210817024805-1554185: exit status 7 (101.882516ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-20210817024805-1554185 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210817024852-1554185 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [ac070820-3904-48f2-aa25-d346c670fdee] Pending
helpers_test.go:343: "busybox" [ac070820-3904-48f2-aa25-d346c670fdee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [ac070820-3904-48f2-aa25-d346c670fdee] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 7.028216786s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210817024852-1554185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.78s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-different-port-20210817024852-1554185 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210817024852-1554185 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (20.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-different-port-20210817024852-1554185 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-different-port-20210817024852-1554185 --alsologtostderr -v=3: (20.293299794s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.29s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210817024852-1554185 -n default-k8s-different-port-20210817024852-1554185
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210817024852-1554185 -n default-k8s-different-port-20210817024852-1554185: exit status 7 (87.099006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-different-port-20210817024852-1554185 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (340.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-different-port-20210817024852-1554185 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3
E0817 02:53:14.883593 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 02:53:31.847219 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
E0817 02:56:34.897901 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-different-port-20210817024852-1554185 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (5m40.529683434s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210817024852-1554185 -n default-k8s-different-port-20210817024852-1554185
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (340.87s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-h5wgx" [bea5fe6c-5029-44e1-b093-0759f8b51143] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019221347s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-h5wgx" [bea5fe6c-5029-44e1-b093-0759f8b51143] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005580586s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210817024852-1554185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-different-port-20210817024852-1554185 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (124s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-20210817025908-1554185 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3
E0817 03:00:55.534904 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:00:55.540114 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:00:55.550329 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:00:55.570524 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:00:55.610720 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:00:55.690971 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:00:55.851813 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:00:56.172279 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:00:56.813070 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:00:58.093253 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:01:00.653439 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:01:05.773562 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-20210817025908-1554185 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (2m4.001503048s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (124.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210817025908-1554185 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [e8009c54-b747-4ede-806b-0afb9f475cf9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0817 03:01:16.013925 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
helpers_test.go:343: "busybox" [e8009c54-b747-4ede-806b-0afb9f475cf9] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.027030419s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210817025908-1554185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-20210817025908-1554185 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210817025908-1554185 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (20.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-20210817025908-1554185 --alsologtostderr -v=3
E0817 03:01:36.494272 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-20210817025908-1554185 --alsologtostderr -v=3: (20.24400263s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185: exit status 7 (92.511871ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-20210817025908-1554185 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (344.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-20210817025908-1554185 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3
E0817 03:02:17.454903 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-20210817025908-1554185 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (5m44.597458432s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210817025908-1554185 -n embed-certs-20210817025908-1554185
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (344.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-wrgrg" [4538c3b1-3b7e-491b-8874-738d8af30420] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021882064s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-wrgrg" [4538c3b1-3b7e-491b-8874-738d8af30420] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00714892s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210817025908-1554185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-20210817025908-1554185 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (86.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-20210817030748-1554185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-20210817030748-1554185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (1m26.00969224s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210817030748-1554185 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [82a18de8-ec87-46f0-bcc0-376f38fa9b3d] Pending

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [82a18de8-ec87-46f0-bcc0-376f38fa9b3d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [82a18de8-ec87-46f0-bcc0-376f38fa9b3d] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.038423796s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210817030748-1554185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-20210817030748-1554185 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210817030748-1554185 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (20.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-20210817030748-1554185 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-20210817030748-1554185 --alsologtostderr -v=3: (20.30716464s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185: exit status 7 (88.265281ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-20210817030748-1554185 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (332.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-20210817030748-1554185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-20210817030748-1554185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (5m32.62476019s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210817030748-1554185 -n no-preload-20210817030748-1554185
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (332.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-l6fk8" [48225b4b-30c2-4ed3-9c80-858b0ce448b9] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022638825s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-l6fk8" [48225b4b-30c2-4ed3-9c80-858b0ce448b9] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011654919s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210817030748-1554185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-20210817030748-1554185 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (84.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-20210817031538-1554185 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-20210817031538-1554185 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (1m24.554056352s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (84.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-20210817031538-1554185 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (20.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-20210817031538-1554185 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-20210817031538-1554185 --alsologtostderr -v=3: (20.386086398s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210817031538-1554185 -n newest-cni-20210817031538-1554185
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210817031538-1554185 -n newest-cni-20210817031538-1554185: exit status 7 (95.811142ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-20210817031538-1554185 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (40.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-20210817031538-1554185 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-20210817031538-1554185 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (39.698258576s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210817031538-1554185 -n newest-cni-20210817031538-1554185
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-20210817031538-1554185 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (94.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p auto-20210817024630-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p auto-20210817024630-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (1m34.817123299s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.82s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-20210817024630-1554185 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210817024630-1554185 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-t62dh" [c58240ba-c36f-4622-ae3e-ed76c3e16073] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-t62dh" [c58240ba-c36f-4622-ae3e-ed76c3e16073] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.202639878s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.66s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210817024630-1554185 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210817024630-1554185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210817024630-1554185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (98.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p calico-20210817024631-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
E0817 03:20:55.534646 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:21:58.474180 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p calico-20210817024631-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: (1m38.632078169s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.63s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-lhgdm" [b06ff511-960f-45a7-a3ed-09b1f84fd34f] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.020848865s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-20210817024631-1554185 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210817024631-1554185 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-xs9rk" [e562f02f-2fa6-4af8-9929-718e7bfb9859] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-xs9rk" [e562f02f-2fa6-4af8-9929-718e7bfb9859] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.024715235s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.64s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210817024631-1554185 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:181: (dbg) Run:  kubectl --context calico-20210817024631-1554185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:231: (dbg) Run:  kubectl --context calico-20210817024631-1554185 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (94.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p custom-weave-20210817024631-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd
E0817 03:23:14.884301 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210817015042-1554185/client.crt: no such file or directory
E0817 03:23:31.847209 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210817021007-1554185/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p custom-weave-20210817024631-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd: (1m34.079064627s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (94.08s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-weave-20210817024631-1554185 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (10.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210817024631-1554185 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-27v7z" [3d862879-99f3-494f-8eac-1d3981be5b26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0817 03:24:14.631114 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-27v7z" [3d862879-99f3-494f-8eac-1d3981be5b26] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 10.005809353s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (10.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (123.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-20210817024631-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd
E0817 03:24:42.315122 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210817030748-1554185/client.crt: no such file or directory
E0817 03:25:09.127126 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:09.132332 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:09.142555 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:09.162744 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:09.203007 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:09.283374 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:09.443559 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:09.764080 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:10.404931 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:11.685104 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:14.245776 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:19.366847 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:21.018051 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:21.023257 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:21.033437 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:21.053631 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:21.093812 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:21.174048 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:21.334360 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:21.654484 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:22.295269 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:23.575783 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:26.136013 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:29.607175 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:31.256483 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:41.497318 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
E0817 03:25:50.087735 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210817024630-1554185/client.crt: no such file or directory
E0817 03:25:55.534926 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210817024852-1554185/client.crt: no such file or directory
E0817 03:26:01.977484 1554185 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-1545958-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/old-k8s-version-20210817024805-1554185/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-20210817024631-1554185 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (2m3.912205794s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (123.91s)

                                                
                                    

Test skip (30/241)

TestDownloadOnly/v1.14.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (13.95s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-20210817015028-1554185 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-arm64 start --download-only -p download-docker-20210817015028-1554185 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (13.307461032s)
aaa_download_only_test.go:238: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-20210817015028-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-20210817015028-1554185
--- SKIP: TestDownloadOnlyKic (13.95s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:398: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:46: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1541: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestPreload (0s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:36: skipping TestPreload - not yet supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestPreload (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
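The darwin-only tunnel DNS entries and TestScheduledStopWindows above follow the same pattern, keyed on the host OS rather than the architecture. A minimal sketch using runtime.GOOS (the test body is a placeholder, not the actual minikube test):

    package example

    import (
        "runtime"
        "testing"
    )

    func TestScheduledStopExample(t *testing.T) {
        // run only on windows; the inverse check (skip on windows) is just as common
        if runtime.GOOS != "windows" {
            t.Skip("test only runs on windows")
        }
        // scheduled-stop behaviour would be exercised here
    }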

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210817030748-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-20210817030748-1554185
--- SKIP: TestStartStop/group/disable-driver-mounts (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as the containerd container runtime requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210817024630-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-20210817024630-1554185
--- SKIP: TestNetworkPlugins/group/kubenet (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210817024630-1554185" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p flannel-20210817024630-1554185
--- SKIP: TestNetworkPlugins/group/flannel (0.27s)

                                                
                                    